WO2024047696A1 - Information processing system, information processing method, and program - Google Patents
Information processing system, information processing method, and program
- Publication number
- WO2024047696A1 (PCT/JP2022/032390)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- information processing
- processing system
- video
- annotation
- Prior art date
Links
- 230000010365 information processing Effects 0.000 title claims abstract description 83
- 238000003672 processing method Methods 0.000 title claims description 15
- 238000000034 method Methods 0.000 claims description 13
- 238000011156 evaluation Methods 0.000 claims description 3
- 238000001356 surgical procedure Methods 0.000 abstract description 8
- 238000004891 communication Methods 0.000 description 25
- 238000010586 diagram Methods 0.000 description 17
- 230000000694 effects Effects 0.000 description 12
- 230000006870 function Effects 0.000 description 10
- 210000000056 organ Anatomy 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 3
- 210000000496 pancreas Anatomy 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 241001465754 Metazoa Species 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 239000003814 drug Substances 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/08—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
- G09B5/14—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
Definitions
- the present invention relates to an information processing system, an information processing method, and a program.
- Patent Document 1 discloses a technique aimed at improving communication between surgeons.
- The technology disclosed in Patent Document 1 is configured so that annotations can be written to specific parts of a video. However, it has been found that inexperienced users (physicians) may not be able to understand the instructor's ideas clearly.
- the present invention aims to provide an information processing system etc. that can efficiently provide medical education.
- an information processing system that supports medical education.
- This information processing system includes a control section.
- the control unit is configured to perform the following steps.
- In the video reception step, a surgical operation video is received from the first user.
- In the display control step, the operation video is displayed as a shared video so that the first user and a second user different from the first user can view it.
- The shared video is configured so that the first user and/or the second user can write annotations, and the screen that displays the shared video is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added.
- FIG. 1 is a configuration diagram showing an information processing system 1 according to the present embodiment.
- FIG. 2 is a block diagram showing the hardware configuration of a first user terminal 2.
- FIG. 3 is a block diagram showing the hardware configuration of a server 3.
- FIG. 4 is a block diagram showing the hardware configuration of a second user terminal 4.
- FIG. 5 is a block diagram showing functions realized by a control unit 33 and the like in the server 3.
- FIG. 6 is an activity diagram showing the flow of information processing according to the present embodiment.
- FIG. 7 is a schematic diagram for explaining a shared video.
- FIG. 8 is an example of a screen, displayed on the first user terminal 2, that shows search results.
- That is, the information processing system according to the present embodiment is an information processing system that supports medical education, comprising a control unit, wherein the control unit is configured to execute the following steps: in a video reception step, a surgical operation video is received from a first user; and in a display control step, the operation video is displayed as a shared video that can be viewed by the first user and a second user different from the first user.
- Here, the shared video is configured so that the first user and/or the second user can write annotations.
- The screen on which the shared video is displayed is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added.
- the program for realizing the software appearing in this embodiment may be provided as a non-transitory computer-readable medium, or may be downloaded from an external server.
- the program may be provided in such a manner that it is started on an external computer and its functions are realized on a client terminal (so-called cloud computing).
- the term "unit” may include, for example, a combination of hardware resources implemented by circuits in a broad sense and software information processing that can be concretely implemented by these hardware resources.
- various types of information are handled in this embodiment; this information is represented, for example, by the physical value of a signal value representing a voltage or current, by the high and low levels of a signal value as a binary bit collection consisting of 0s and 1s, or by quantum superposition (so-called quantum bits), and communication and computation can be performed on circuits in a broad sense.
- a circuit in a broad sense is a circuit realized by appropriately combining at least a circuit, circuitry, a processor, a memory, and the like.
- ASIC Application Specific Integrated Circuit
- SPLD Simple Programmable Logic Device
- CPLD Complex Programmable Logic Device
- FPGA field programmable gate array
- FIG. 1 is a configuration diagram showing an information processing system 1 according to the present embodiment.
- the information processing system 1 is an information processing system that supports medical education.
- the information processing system 1 includes a first user terminal 2, a server 3, and a second user terminal 4, which are connected through a network 5. These components will be further explained. Note that a system exemplified by the information processing system 1 is composed of one or more devices or components. Therefore, even the server 3 alone is an example of a system.
- the first user terminal 2 is typically a terminal owned by a user receiving operational guidance.
- FIG. 2 is a block diagram showing the hardware configuration of the first user terminal 2. The first user terminal 2 has a communication unit 21, a storage unit 22, a control unit 23, a display unit 24, an input unit 25, and an audio output unit 26, and these components are electrically connected within the first user terminal 2 via a communication bus 20. Descriptions of the communication unit 21, the storage unit 22, and the control unit 23 are omitted because they are substantially the same as the communication unit 31, the storage unit 32, and the control unit 33 of the server 3, which are described later.
- the display unit 24 may be included in the casing of the first user terminal 2, or may be attached externally.
- the display unit 24 displays a screen of a graphical user interface (GUI) that can be operated by the user.
- GUI graphical user interface
- This is preferably implemented by using display devices such as a CRT display, a liquid crystal display, an organic EL display, and a plasma display depending on the type of the first user terminal 2, for example.
- display unit 24 will be described as being included in the casing of the first user terminal 2.
- the input unit 25 may be included in the casing of the first user terminal 2, or may be externally attached.
- the input unit 25 may be integrated with the display unit 24 and implemented as a touch panel. With a touch panel, the user can input tap operations, swipe operations, and the like. Of course, a switch button, a mouse, a QWERTY keyboard, etc. may be used instead of the touch panel. That is, the input unit 25 receives operation inputs made by the user. The input is transferred as a command signal to the control unit 23 via the communication bus 20, and the control unit 23 can perform predetermined control or calculation as necessary.
- the audio output unit 26 may be included in the housing of the first user terminal 2, or may be externally attached.
- the audio output unit 26 outputs audio that can be recognized by the user.
- the audio output unit 26 may be a non-directional speaker, a directional speaker, or both.
- the audio output unit 26 will be described as being included in the casing of the first user terminal 2.
- FIG. 3 is a block diagram showing the hardware configuration of the server 3.
- the server 3 includes a communication section 31, a storage section 32, and a control section 33, and these components are electrically connected via a communication bus 30 inside the server 3. Each component will be further explained.
- although the communication unit 31 is preferably a wired communication means such as USB, IEEE 1394, Thunderbolt (registered trademark), or wired LAN network communication, it may also include wireless LAN network communication, mobile communication such as 3G/LTE/5G, Bluetooth (registered trademark) communication, and the like as necessary. That is, it is more preferable to implement the communication unit 31 as a set of these plural communication means. The server 3 communicates various information with the first user terminal 2 and the second user terminal 4 over the network 5 via the communication unit 31.
- the storage unit 32 stores various information defined by the above description. It may be implemented, for example, as a storage device such as a solid state drive (SSD) that stores various programs related to the server 3 executed by the control unit 33, as a memory such as a random access memory (RAM) that stores information temporarily needed for program computation (arguments, arrays, etc.), or as a combination of these. In particular, the storage unit 32 stores various programs related to the server 3 that are executed by the control unit 33.
- SSD solid state drive
- RAM random access memory
- the control unit 33 processes and controls the overall operation related to the server 3.
- the control unit 33 is, for example, a central processing unit (CPU) not shown.
- the control unit 33 implements various functions related to the server 3 by reading predetermined programs stored in the storage unit 32. That is, information processing by the software stored in the storage unit 32 is concretely realized by the control unit 33, which is an example of hardware, and can be executed as each functional unit included in the control unit 33. These functional units are described in further detail in Section 2.
- the control unit 33 is not limited to a single unit, and may be implemented as a plurality of control units 33, one for each function, or as a combination thereof.
- FIG. 4 is a block diagram showing the hardware configuration of the second user terminal 4.
- the second user terminal 4 includes a communication unit 41, a storage unit 42, a control unit 43, a display unit 44, an input unit 45, and an audio output unit 46, and these components are electrically connected within the second user terminal 4 via a communication bus 40.
- descriptions of the communication unit 41, the storage unit 42, the control unit 43, the display unit 44, the input unit 45, and the audio output unit 46 are omitted because they are substantially the same as the communication unit 21, the storage unit 22, the control unit 23, the display unit 24, the input unit 25, and the audio output unit 26 of the first user terminal 2 described above.
- FIG. 5 is a block diagram showing the functions realized by the control unit 33 and the like in the server 3.
- the server 3, which is an example of the information processing system 1, includes a video reception section 331 and a display control section 332.
- FIG. 5 shows an embodiment in which the server 3 includes a search request reception section 333, a search execution section 334, a payment execution section 335, and a storage management section 336 as functions provided in the server 3.
- the video reception unit 331 is configured to be able to execute the video reception step. In this video reception step, a surgical operation video is received from the first user. This will be explained in more detail later.
- the display control unit 332 is configured to be able to execute display control steps. In this display control step, various display information is generated to control display contents that can be viewed by the user.
- the display information may be visual information itself, such as a screen, image, icon, or text, generated in a form that the user can view, or it may be rendering information for causing various terminals to display such visual information.
- the display control unit 332 displays the operation video as a shared video for the first user and a second user who is different from the first user.
- the shared video is configured so that the first user and/or the second user can write annotations, and the screen that displays the shared video is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added. Details of this display content are explained later.
- the search request receiving unit 333 is configured to be able to execute a search request receiving step.
- a search request for a user who shares the operation video is received from the first user.
- the search execution unit 334 is configured to be able to execute a search execution step.
- a search for users who share the operation video is executed based on the above-mentioned search request. Information processing related to these searches will be explained later.
- the payment execution unit 335 is configured to be able to execute the payment execution step.
- in this payment execution step, a process is executed in which the first user pays the second user compensation for the second user's writing of annotations and/or addition of text comments.
- the storage management unit 336 is configured to be able to execute a storage management step. In this storage management step, various information related to the information processing system 1 of this embodiment is stored and managed. Typically, the storage management unit 336 is configured to store received videos, annotations, comments, and the like in a storage area. This storage area is exemplified by the storage unit 32 of the server 3, but it may also be the storage unit 22 provided in the first user terminal 2 or the storage unit 42 provided in the second user terminal 4. That is, by the function of the storage management unit 336, various information related to education can be made viewable not only by the first user but also by the second user. Note that this storage area does not necessarily have to be within the information processing system 1, and the storage management unit 336 may also manage various information so that it is stored in an external storage device or the like.
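For illustration only, the kind of data the storage management unit 336 might keep (shared videos, annotations, and the text comments linked to them, viewable by both the first user and the second user) could be modeled roughly as follows. This is a minimal sketch; all class, field, and method names are hypothetical and do not come from the publication.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class TextComment:
    author_id: str            # first user or second user
    text: str                 # e.g. "in this direction"

@dataclass
class Annotation:
    author_id: str            # user who drew the object (e.g. OBJ1)
    video_time_s: float       # time in the operation video the annotation refers to
    drawing: bytes            # serialized handwritten object or stamp
    comments: List[TextComment] = field(default_factory=list)  # comments linked to this annotation

@dataclass
class SharedVideo:
    video_id: str
    owner_id: str             # the first user who uploaded the operation video
    viewer_ids: Set[str]      # first user plus the identified second user(s)
    file_path: str
    annotations: List[Annotation] = field(default_factory=list)

class StorageManager:
    """Hypothetical stand-in for the storage management unit 336: keeps videos,
    annotations, and comments in a storage area and answers viewability checks."""

    def __init__(self) -> None:
        self._videos: Dict[str, SharedVideo] = {}

    def save_video(self, video: SharedVideo) -> None:
        self._videos[video.video_id] = video

    def add_annotation(self, video_id: str, annotation: Annotation) -> None:
        self._videos[video_id].annotations.append(annotation)

    def can_view(self, video_id: str, user_id: str) -> bool:
        video = self._videos[video_id]
        return user_id == video.owner_id or user_id in video.viewer_ids
```

In such a sketch, storing each comment inside the annotation it is linked to mirrors the relationship described above between annotations and their associated text comments.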
- the information processing system 1 of this embodiment is used to support medical education by receiving surgical operation videos from the first user.
- the operation video of a surgery to which this information processing system 1 is applied may be, for example, a video related to surgery, ophthalmology, dentistry, urology, otorhinolaryngology, or obstetrics.
- the surgical target may be a human or an animal other than a human. This embodiment is described based on a case where the operation video is a video of a general surgical operation.
- FIG. 6 is an activity diagram showing the flow of information processing according to this embodiment. Specifically, in the information processing method of this embodiment, at least a video reception step and a display control step are executed. The details of the processing of each activity will be described below.
- the video reception unit 331 receives a surgery operation video from the first user (activity A101).
- This operation video is typically a video of the surgery performed by the first user, and there are no particular restrictions on the file extension or the like.
- Acceptance of this video can be achieved, for example, by the first user logging into a predetermined web page and uploading the video file.
- the display control unit 332 displays this operation video so that the first user and a second user different from the first user can view it as a shared video (activity A102).
- here, the shared video is configured so that the first user and/or the second user can write annotations, and the screen that displays the shared video is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added.
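As a hedged sketch of activities A101 and A102 (accepting an uploaded operation video and making it viewable as a shared video), the server-side flow could look roughly like the following. The in-memory registry, function names, and identifiers are assumptions for illustration only.

```python
import uuid
from typing import Dict

# In-memory registry of shared videos; a stand-in for the storage area
# managed by the storage management unit 336 (hypothetical).
SHARED_VIDEOS: Dict[str, dict] = {}

def accept_operation_video(first_user_id: str, uploaded_file_path: str) -> str:
    """Activity A101 (sketch): accept an operation video uploaded by the first
    user, e.g. after logging in to a predetermined web page."""
    video_id = str(uuid.uuid4())
    SHARED_VIDEOS[video_id] = {
        "owner_id": first_user_id,
        "file_path": uploaded_file_path,   # no particular restriction on file format
        "viewer_ids": {first_user_id},     # the second user is added once identified
        "annotations": [],                 # annotations and their linked text comments
    }
    return video_id

def display_as_shared_video(video_id: str, second_user_id: str) -> dict:
    """Activity A102 (sketch): make the video viewable by the first user and a
    second user different from the first user, and return a display payload."""
    video = SHARED_VIDEOS[video_id]
    video["viewer_ids"].add(second_user_id)
    return {"video_id": video_id, "viewers": sorted(video["viewer_ids"])}

# Usage sketch
vid = accept_operation_video("first_user_A", "/tmp/operation.mp4")
print(display_as_shared_video(vid, "second_user_XY"))
```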
- FIG. 7 is a schematic diagram for explaining a shared video.
- on the display screen D shown in FIG. 7, the shared video is displayed as a video MV1, and an input form IF1 in which a text comment can be posted is provided.
- as described above, the shared video is configured so that the first user and/or the second user can write annotations. That is, the video MV1 is a recording of the operation performed on the organ OG1, and the video MV1 is configured so that an object OBJ1 can be written onto it as an annotation indicating, for example, a point of note regarding the operation.
- the object OBJ1 that functions as this annotation may be any object that is displayed using a known method. Typically, this object OBJ1 may be provided as a handwritten object using a pen tablet device or the like, or may be based on a stamp function prepared in advance.
- a text comment of the first user and/or the second user can be attached to this annotation.
- the text comment linked to this annotation (the comment "in this direction" in FIG. 7) can be posted (assigned) via the input form IF1.
- a text comment can be added by inputting a predetermined text into the input form IF1 and then pressing the button BT2 labeled "ENTER" by a click operation or the like.
- the comment written in the input form IF1 and the annotation (object OBJ1) before being saved are deleted.
- the poster of this comment may be displayed.
- the annotation and text comment may be displayed in association with the time information of the operation video.
- the configuration may be such that, for example, it is possible to move to a predetermined time in the video based on the content of the annotation or comment. For example, referring to FIG. 7, clicking on the comment "Pinch further down" jumps to the 16 minutes 56 seconds portion of the video.
- in FIG. 7, a comment made by an instructor named "XY" is shown, and a reply to this comment may be made by another user (the first user, or a second user different from XY).
- the reply to this comment may be a text reply.
- the comment field on the display screen D may be provided with a "like" button, buttons for various reactions, a button for displaying a stamp, and the like, and the above-mentioned reply may be made by pressing such a button.
- the display screen D in FIG. 7 is provided with an object OBJ2 for controlling playback/stop of the video, an object OBJ3 for repeat playback, and an object OBJ4 for adjusting the volume.
- an object OBJ5 for changing the color used when drawing the annotation (OBJ1) and an object OBJ6 for changing the line width used when drawing the annotation (OBJ1) are also provided.
- further, an object OBJ7 for displaying the video MV1 in full screen is provided. Note that the functions that can be provided on the display screen D are not limited to these, and functions may be added or removed as necessary.
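The time-linked behavior described above (annotations and their linked text comments tied to the time information of the operation video, with a click on a comment jumping playback to that time) could be sketched as follows. The data types and the seek callback are assumptions, not details from the publication.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LinkedComment:
    author_id: str
    text: str                       # e.g. "Pinch further down"

@dataclass
class TimedAnnotation:
    author_id: str
    video_time_s: float             # e.g. 16 * 60 + 56 for 16 min 56 s
    comments: List[LinkedComment] = field(default_factory=list)

def jump_to_annotation(annotation: TimedAnnotation,
                       seek: Callable[[float], None]) -> None:
    """Clicking a comment linked to an annotation seeks the player to the time
    the annotation is associated with (sketch; 'seek' is assumed to be the
    video player's seek function taking seconds)."""
    seek(annotation.video_time_s)

# Usage sketch
note = TimedAnnotation(author_id="XY", video_time_s=16 * 60 + 56)
note.comments.append(LinkedComment(author_id="XY", text="Pinch further down"))
jump_to_annotation(note, seek=lambda t: print(f"seeking to {t:.0f} s"))
```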
- the storage management unit 336 stores the displayed shared videos, annotations, comments, etc. as data in a predetermined storage area (activity A103).
- a search process for a second user who provides guidance to the first user may be performed.
- a search process may be performed after the video reception unit 331 of the server 3 receives the operation video as shown in the activity diagram of FIG. 6, but is not necessarily limited to this.
- This search process may be performed before accepting the operation video.
- the search request receiving unit 333 receives, from the first user, a search request for a user with whom the operation video is to be shared (activity A104). The search execution unit 334 then executes a search for such users based on the search request (activity A105). The search process is completed by identifying the second user with whom the operation video is to be shared (activity A106).
- this search request may be based on various information; for example, the first user may search for a predetermined second user based on field of expertise, age, years of experience, affiliation, location, and the like. Furthermore, the second user is identified by specifying, from among the second user candidates extracted as a result of the search, the user to whom sharing of the video is requested.
- the search execution unit 334 may be configured to identify, from the content of the operation video, the body part on which the operation is being performed, and to search for a user who has expertise in that part.
- the video posted by the first user includes images of body parts such as the organ OG1.
- when the search execution unit 334 of the server 3 receives a search request from the first user, it may be configured to determine the body part captured in the video and to search for a user who has expertise in surgery on that part.
- expertise in a region refers to, for example, having specialized knowledge regarding a predetermined region, and typically refers to having expertise according to the classification of clinical medicine.
- the degree of expertise may be determined based on the number of submitted papers, the position at the hospital to which the person belongs, etc.
- FIG. 8 is an example of a screen displayed on the first user terminal 2 that displays search results.
- the search execution unit 334 may be configured to identify that the organ OG1 captured in the operation video is the pancreas and, based on this identification, to present doctors who have expertise in the pancreas to the first user as candidates for the second user.
- the first user viewing such a display screen D can identify the second user with whom the video will be shared by pressing at least one (one or more) of the buttons BT11, BT21, and BT31.
- when the buttons BT12, BT22, and BT32 are pressed, reviews from the contributors of other operation videos are displayed. In this way, in the information processing method of this embodiment, an evaluation regarding the second user may be displayed so that the first user can view it.
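Purely as an illustration of the search described above (identifying the body part from the operation video and presenting second-user candidates with expertise in that part, ranked, for example, by number of papers, hospital position, and reviews), a sketch might look like the following. The stub region detector, scoring weights, and field names are invented for illustration and are not taken from the publication.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    user_id: str
    specialties: List[str]     # e.g. ["pancreas"]
    paper_count: int           # number of submitted papers (one possible expertise signal)
    position_rank: int         # e.g. higher for a more senior hospital position (assumption)
    rating: float              # reviews from contributors of other operation videos

def identify_region(video_path: str) -> str:
    """Stub: in practice this would analyze the video (e.g. recognize organ OG1).
    Here it simply returns a fixed label for illustration."""
    return "pancreas"

def search_second_users(video_path: str, candidates: List[Candidate]) -> List[Candidate]:
    """Return candidates with expertise in the identified region, ranked by a
    simple, assumed expertise score."""
    region = identify_region(video_path)
    matches = [c for c in candidates if region in c.specialties]
    return sorted(matches,
                  key=lambda c: (c.paper_count + 10 * c.position_rank, c.rating),
                  reverse=True)

# Usage sketch
pool = [Candidate("dr_A", ["pancreas"], paper_count=12, position_rank=2, rating=4.6),
        Candidate("dr_B", ["liver"], paper_count=30, position_rank=3, rating=4.9),
        Candidate("dr_C", ["pancreas"], paper_count=5, position_rank=1, rating=4.2)]
for c in search_second_users("/tmp/operation.mp4", pool):
    print(c.user_id, c.rating)
```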
- compensation may be paid to the second user with whom the video is shared.
- the second user with whom the video is shared may not be paid (free of charge).
- here, the information processing method of this embodiment is described for a case in which compensation is paid to the second user with whom the video is shared.
- in the information processing system of this embodiment, as shown in the activity diagram of FIG. 6, payment to the second user may be executed.
- the payment execution unit 335 of the server 3 executes a process in which the first user pays the second user compensation for the second user's writing of annotations and/or addition of text comments.
- This consideration may be appropriately set between the first user and the second user.
- the amount of consideration may depend on the number and quality of annotations and comments.
- the compensation may be set higher depending on the degree of expertise. It should be noted that, together with the search results as shown in FIG. 8 described above, the compensation for requesting each candidate of the second user to share or add a comment may be clearly displayed.
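As a hedged sketch of one way the compensation described above could be determined (depending on the number and quality of annotations and comments, and set higher for a greater degree of expertise), consider the following. The base rate, quality scale, and expertise multiplier are assumptions and are not taken from the publication.

```python
def calculate_compensation(num_annotations: int,
                           num_comments: int,
                           avg_quality: float,      # assumed scale: 0.0 to 1.0
                           expertise_level: int,    # assumed scale: 1 to 5
                           base_rate: float = 500.0) -> float:
    """Hypothetical compensation: a per-item rate weighted by quality, then
    scaled up with the second user's degree of expertise."""
    items = num_annotations + num_comments
    expertise_multiplier = 1.0 + 0.25 * (expertise_level - 1)
    return round(base_rate * items * avg_quality * expertise_multiplier, 2)

# Usage sketch: 3 annotations and 5 comments of good quality from a highly
# specialized second user.
print(calculate_compensation(3, 5, avg_quality=0.8, expertise_level=4))
```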
- according to the information processing method of this embodiment, the annotation and the text comment linked to the annotation are visible to the first user, which helps the first user receiving the education to understand easily. Therefore, medical education can be provided efficiently.
- in the embodiment described above, the first user searches for candidates for the second user; conversely, however, the second user may search for a first user whom he or she wishes to instruct.
- the first user can set a tag associated with the operation, and the second user can search for the first user by searching for this tag.
- in the embodiment described above, the server 3 performs various storage and control operations; however, a plurality of external devices may be used instead of the server 3. That is, using blockchain technology or the like, information regarding the attendance history may be stored in a distributed manner across a plurality of external devices.
- An information processing system that supports medical education, comprising a control unit, wherein the control unit is configured to execute the following steps: in a video reception step, a surgical operation video is received from a first user; in a display control step, the operation video is displayed as a shared video that can be viewed by the first user and a second user different from the first user; the shared video is configured so that the first user and/or the second user can write annotations; and the screen on which the shared video is displayed is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added.
- The information processing system described above, wherein, in a search request receiving step, a search request for a user with whom the operation video is to be shared is received from the first user, and, in a search execution step, a search for the user with whom the operation video is to be shared is executed based on the search request.
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
[Problem] The present invention addresses the problem of providing an information processing system, etc., with which it is possible to efficiently perform healthcare education. [Solution] According to one aspect of the present invention, provided is an information processing system for assisting in healthcare education. This information processing system comprises a control unit. The control unit is configured to execute each of the following steps. In a moving-image acceptance step, an operation moving image of surgery is accepted from a first user. In a display control step, the operation moving image is visibly displayed as a shared moving image to the first user and a second user different from the first user. The shared moving image is configured to enable an annotation to be written by the first user and/or the second user, and the screen on which the shared moving image is displayed is configured to enable a text comment of the first user and/or the second user associated with the annotation to be added.
Description
The present invention relates to an information processing system, an information processing method, and a program.
In recent years, with the development of medical technology, surgical techniques have become more complex. Along with this, systems to support surgery have also been developed, and for example, Patent Document 1 discloses a technique aimed at improving communication between surgeons.
The technology disclosed in Patent Document 1 is configured so that annotations can be written to specific parts of a video. However, it has been found that inexperienced users (physicians) may not be able to understand the instructor's ideas clearly.
In view of the above circumstances, the present invention aims to provide an information processing system etc. that can efficiently provide medical education.
According to one aspect of the present invention, an information processing system that supports medical education is provided. This information processing system includes a control unit. The control unit is configured to perform the following steps. In a video reception step, a surgical operation video is received from the first user. In a display control step, the operation video is displayed as a shared video so that the first user and a second user different from the first user can view it. Here, the shared video is configured so that the first user and/or the second user can write annotations, and the screen that displays the shared video is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added.
According to the above aspect, it is possible to provide an information processing system and the like that can efficiently provide medical education.
Hereinafter, embodiments of the present invention will be described using the drawings. Various features shown in the embodiments described below can be combined with each other.
That is, the information processing system according to this embodiment is as shown below.
An information processing system that supports medical education, comprising:
a control unit,
wherein the control unit is configured to execute the following steps:
in a video reception step, a surgical operation video is received from a first user;
in a display control step, the operation video is displayed as a shared video that can be viewed by the first user and a second user different from the first user;
here, the shared video is configured so that the first user and/or the second user can write annotations; and
the screen on which the shared video is displayed is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added.
The program for realizing the software appearing in this embodiment may be provided as a non-transitory computer-readable medium, may be provided so as to be downloadable from an external server, or may be provided in such a manner that the program is started on an external computer and its functions are realized on a client terminal (so-called cloud computing).
Furthermore, in this embodiment, the term "unit" may include, for example, a combination of hardware resources implemented by circuits in a broad sense and information processing by software that can be concretely realized by these hardware resources. In addition, various types of information are handled in this embodiment; this information is represented, for example, by the physical value of a signal value representing a voltage or current, by the high and low levels of a signal value as a binary bit collection consisting of 0s and 1s, or by quantum superposition (so-called quantum bits), and communication and computation can be performed on circuits in a broad sense.
Further, a circuit in a broad sense is a circuit realized by appropriately combining at least a circuit, circuitry, a processor, a memory, and the like. That is, it includes an application specific integrated circuit (ASIC), a programmable logic device (for example, a simple programmable logic device (SPLD), a complex programmable logic device (CPLD), or a field programmable gate array (FPGA)), and the like.
1. Hardware Configuration
This section describes the hardware configuration of this embodiment.
1.1 Information Processing System 1
FIG. 1 is a configuration diagram showing an information processing system 1 according to the present embodiment. The information processing system 1 is an information processing system that supports medical education. The information processing system 1 includes a first user terminal 2, a server 3, and a second user terminal 4, which are connected through a network 5. These components will be further explained below. Note that a system, as exemplified by the information processing system 1, is composed of one or more devices or components. Therefore, even the server 3 alone is an example of a system.
1.2 First User Terminal 2
The first user terminal 2 is typically a terminal owned by a user receiving operational guidance. FIG. 2 is a block diagram showing the hardware configuration of the first user terminal 2.
The first user terminal 2 has a communication unit 21, a storage unit 22, a control unit 23, a display unit 24, an input unit 25, and an audio output unit 26, and these components are electrically connected within the first user terminal 2 via a communication bus 20. Descriptions of the communication unit 21, the storage unit 22, and the control unit 23 are omitted because they are substantially the same as the communication unit 31, the storage unit 32, and the control unit 33 of the server 3, which are described later.
For example, the display unit 24 may be included in the casing of the first user terminal 2, or may be attached externally. The display unit 24 displays a screen of a graphical user interface (GUI) that can be operated by the user. This is preferably implemented by using display devices such as a CRT display, a liquid crystal display, an organic EL display, and a plasma display depending on the type of the first user terminal 2, for example. Here, the display unit 24 will be described as being included in the casing of the first user terminal 2.
The input unit 25 may be included in the casing of the first user terminal 2, or may be externally attached. For example, the input unit 25 may be integrated with the display unit 24 and implemented as a touch panel. With a touch panel, the user can input tap operations, swipe operations, and the like. Of course, a switch button, a mouse, a QWERTY keyboard, etc. may be used instead of the touch panel. That is, the input unit 25 receives operation inputs made by the user. The input is transferred as a command signal to the control unit 23 via the communication bus 20, and the control unit 23 can perform predetermined control or calculation as necessary.
For example, the audio output unit 26 may be included in the housing of the first user terminal 2, or may be externally attached. The audio output unit 26 outputs audio that can be recognized by the user. The audio output unit 26 may be a non-directional speaker, a directional speaker, or both. Here, the audio output unit 26 will be described as being included in the casing of the first user terminal 2.
1.3 Server 3
FIG. 3 is a block diagram showing the hardware configuration of the server 3. The server 3 includes a communication unit 31, a storage unit 32, and a control unit 33, and these components are electrically connected via a communication bus 30 inside the server 3. Each component will be further explained below.
Although the communication unit 31 is preferably a wired communication means such as USB, IEEE 1394, Thunderbolt (registered trademark), or wired LAN network communication, it may also include wireless LAN network communication, mobile communication such as 3G/LTE/5G, Bluetooth (registered trademark) communication, and the like as necessary. That is, it is more preferable to implement the communication unit 31 as a set of these plural communication means. The server 3 communicates various information with the first user terminal 2 and the second user terminal 4 over the network 5 via the communication unit 31.
The storage unit 32 stores various information defined by the above description. It may be implemented, for example, as a storage device such as a solid state drive (SSD) that stores various programs related to the server 3 executed by the control unit 33, as a memory such as a random access memory (RAM) that stores information temporarily needed for program computation (arguments, arrays, etc.), or as a combination of these. In particular, the storage unit 32 stores various programs related to the server 3 that are executed by the control unit 33.
The control unit 33 processes and controls the overall operation of the server 3. The control unit 33 is, for example, a central processing unit (CPU), not shown. The control unit 33 realizes various functions of the server 3 by reading predetermined programs stored in the storage unit 32. That is, information processing by the software stored in the storage unit 32 is concretely realized by the control unit 33, which is an example of hardware, and can be executed as each functional unit included in the control unit 33. These functional units are described in further detail in Section 2. Note that the control unit 33 is not limited to a single unit, and may be implemented as a plurality of control units 33, one for each function, or as a combination thereof.
1.4 Second User Terminal 4
The second user terminal 4 is typically a terminal owned by a person who instructs the first user (who may be referred to as an "instructor," as one example). FIG. 4 is a block diagram showing the hardware configuration of the second user terminal 4.
The second user terminal 4 includes a communication unit 41, a storage unit 42, a control unit 43, a display unit 44, an input unit 45, and an audio output unit 46, and these components are electrically connected within the second user terminal 4 via a communication bus 40. Descriptions of the communication unit 41, the storage unit 42, the control unit 43, the display unit 44, the input unit 45, and the audio output unit 46 are omitted because they are substantially the same as the communication unit 21, the storage unit 22, the control unit 23, the display unit 24, the input unit 25, and the audio output unit 26 of the first user terminal 2 described above.
2. Functional Configuration
This section describes the functional configuration of this embodiment. As described above, information processing by the software stored in the storage unit 32 is concretely realized by the control unit 33, which is an example of hardware, and can be executed as each functional unit included in the control unit 33.
FIG. 5 is a block diagram showing the functions realized by the control unit 33 and the like in the server 3. Specifically, the server 3, which is an example of the information processing system 1, includes a video reception section 331 and a display control section 332. Note that FIG. 5 shows an embodiment in which the server 3 includes a search request reception section 333, a search execution section 334, a payment execution section 335, and a storage management section 336 as functions provided in the server 3.
The video reception unit 331 is configured to be able to execute the video reception step. In this video reception step, a surgical operation video is received from the first user. This will be explained in more detail later.
The display control unit 332 is configured to be able to execute a display control step. In this display control step, various display information is generated to control the display content that the user can view. Note that the display information may be visual information itself, such as a screen, image, icon, or text, generated in a form that the user can view, or it may be rendering information for causing various terminals to display such visual information. In the information processing system 1 of the present embodiment, the display control unit 332 displays the operation video as a shared video that can be viewed by the first user and a second user different from the first user. Here, the shared video is configured so that the first user and/or the second user can write annotations, and the screen that displays the shared video is configured so that a text comment of the first user and/or the second user, linked to the annotation, can be added.
Details of this display content are explained later.
The search request receiving unit 333 is configured to be able to execute a search request receiving step. In this search request receiving step, a search request for a user with whom the operation video is to be shared is received from the first user. The search execution unit 334 is configured to be able to execute a search execution step. In this search execution step, a search for the user with whom the operation video is to be shared is executed based on the above-mentioned search request.
Information processing related to these searches is explained later.
The payment execution unit 335 is configured to be able to execute a payment execution step. In this payment execution step, a process is executed in which the first user pays the second user compensation for the second user's writing of annotations and/or addition of text comments.
The storage management unit 336 is configured to be able to execute a storage management step. In this storage management step, various information related to the information processing system 1 of this embodiment is stored and managed. Typically, the storage management unit 336 is configured to store received videos, annotations, comments, and the like in a storage area. This storage area is exemplified by the storage unit 32 of the server 3, but it may also be the storage unit 22 provided in the first user terminal 2 or the storage unit 42 provided in the second user terminal 4. That is, by the function of the storage management unit 336, various information related to education can be made viewable not only by the first user but also by the second user. Note that this storage area does not necessarily have to be within the information processing system 1, and the storage management unit 336 may also manage various information so that it is stored in an external storage device or the like.
3. Information Processing Method
In this section, each step of the information processing method executed by the information processing system 1 described above is explained using an activity diagram. As described above, the information processing system 1 of this embodiment is used to support medical education by receiving surgical operation videos from the first user; the medical field is not particularly limited as long as it is one in which surgical operations are performed.
For example, the operation videos to which this information processing system 1 is applied may relate to general surgery, ophthalmology, dentistry, urology, otorhinolaryngology, or obstetrics, and the subject of the surgery may be a human or a non-human animal. This embodiment is described for the case where the operation video is an operation video of a general surgical procedure.
FIG. 6 is an activity diagram showing the flow of information processing according to this embodiment. Specifically, in the information processing method of this embodiment, at least a video reception step and a display control step are executed. The processing of each activity is described in detail below.
First, in the information processing method of this embodiment, the video reception unit 331 receives a surgical operation video from the first user (activity A101). This operation video is typically a video recording of a surgery performed by the first user, and the file extension and the like are not particularly limited.
The video can be received, for example, by the first user logging in to a predetermined web page and uploading the video file.
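To make this reception step (activity A101) easier to picture, the sketch below models it as a simple server-side handler. It is only an illustrative sketch under assumptions made here, not the disclosed implementation; the `OperationVideo` type, the `allowedExtensions` list, and the in-memory store are all hypothetical names introduced for this example.

```typescript
// Minimal sketch of the video reception step (activity A101).
// Type names, the allow-list, and the store are illustrative assumptions.

interface OperationVideo {
  id: string;
  uploaderId: string;   // the first user
  fileName: string;
  uploadedAt: Date;
}

const videoStore = new Map<string, OperationVideo>();

// The disclosure states the file extension is not particularly limited;
// a deployment might still keep a permissive allow-list for playback support.
const allowedExtensions = ["mp4", "mov", "avi", "mkv"];

function receiveOperationVideo(uploaderId: string, fileName: string): OperationVideo {
  const ext = fileName.split(".").pop()?.toLowerCase() ?? "";
  if (!allowedExtensions.includes(ext)) {
    throw new Error(`Unsupported video format: ${ext}`);
  }
  const video: OperationVideo = {
    id: crypto.randomUUID(),
    uploaderId,
    fileName,
    uploadedAt: new Date(),
  };
  videoStore.set(video.id, video);
  return video;
}
```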
Next, the display control unit 332 displays this operation video as a shared video viewable by the first user and by a second user different from the first user (activity A102). Here, the shared video is configured so that the first user and/or the second user can write annotations, and the screen on which the shared video is displayed is configured so that text comments by the first user and/or the second user, linked to those annotations, can be added.
This shared video is explained with reference to FIG. 7, a schematic diagram of the shared video. The display screen D in FIG. 7 shows a video MV1 (the shared video) and, separately from the video MV1, an input form IF1 through which text comments can be posted.
As described above, the shared video is configured so that the first user and/or the second user can write annotations. That is, the video MV1 is a recording of an operation performed on an organ OG1, and an object OBJ1 can be written onto the video MV1 as an annotation; this makes it possible to add notes on, for example, the handling of the medical instruments MI1 and MI2.
The object OBJ1 that functions as this annotation may be displayed by any known method. Typically, the object OBJ1 may be drawn by hand using a pen tablet device or the like, or may be based on a stamp function prepared in advance.
In this embodiment, a text comment by the first user and/or the second user can be added and linked to this annotation. For example, if the object OBJ1 is placed at the playback time of 13 minutes 20 seconds in the video MV1, a text comment linked to this annotation (the comment "In this direction" in FIG. 7) can be posted via the input form IF1. Specifically, a text comment can be added by entering the text in the input form IF1 and then pressing the button BT2 labeled "ENTER" by a click operation or the like. Pressing the button BT1 labeled "CANCEL" deletes the comment entered in the input form IF1 and the unsaved annotation (object OBJ1).
As shown in FIG. 7, the poster of each comment may also be displayed. Further, as shown in FIG. 7, the annotations and text comments may be displayed in association with the time information of the operation video. When the video time and the annotations are linked in this way, it may be possible, for example, to jump to a given time in the video from an annotation or comment. In the example of FIG. 7, clicking the comment "Pinch further down" jumps to the 16 minutes 56 seconds mark of the video.
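One way to realize this time linkage is to store each annotation and comment together with the playback time at which it was created, and to seek the player to that time when the comment is clicked. The sketch below is a minimal illustration under that assumption; the record shapes and the `HTMLVideoElement`-based player are choices made here, not details taken from the disclosure.

```typescript
// Sketch of time-linked annotations and comments (illustrative assumption).

interface Annotation {
  id: string;
  videoId: string;
  authorId: string;
  timeSec: number;                              // e.g. 13 min 20 s -> 800
  strokePath: Array<{ x: number; y: number }>;  // hand-drawn object such as OBJ1
}

interface TextComment {
  id: string;
  annotationId: string;   // each comment is linked to an annotation
  authorId: string;
  timeSec: number;
  body: string;           // e.g. "In this direction"
}

// Clicking a comment in the list jumps the shared video to the linked time,
// as in the "16 min 56 s" example of FIG. 7.
function jumpToComment(player: HTMLVideoElement, comment: TextComment): void {
  player.currentTime = comment.timeSec;
  void player.play();
}
```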
In addition, once a comment has been posted, another user can add to it. For example, FIG. 7 shows a comment made by a supervising doctor "XY"; another user (the first user, or a second user other than XY) can reply to this comment. The reply may be a text reply. Alternatively, the comment field on the display screen D may be provided with a "like" button, buttons for various reactions, a button for displaying a stamp, and the like, and the reply may be expressed by pressing such a button.
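A comment thread with text replies and one-tap reactions could be modeled as below. This is a hedged sketch only; the reaction names and data shapes are assumptions introduced here for illustration.

```typescript
// Sketch of replies and reactions to a posted comment (illustrative assumption).

type Reaction = "like" | "clap" | "question";   // reaction buttons on screen D

interface Reply {
  authorId: string;
  body?: string;         // text reply, or
  reaction?: Reaction;   // a one-tap reaction / stamp
  postedAt: Date;
}

const replies = new Map<string, Reply[]>();     // keyed by comment id

function addReply(commentId: string, reply: Reply): void {
  const thread = replies.get(commentId) ?? [];
  thread.push(reply);
  replies.set(commentId, thread);
}
```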
In addition, the display screen D in FIG. 7 is provided with an object OBJ2 for controlling playback/stop of the video, an object OBJ3 for repeat playback, and an object OBJ4 for adjusting the volume. The display screen D in FIG. 7 is also provided with an object OBJ5 for changing the color used when drawing the annotation (OBJ1) and an object OBJ6 for changing the line width used when drawing the annotation (OBJ1). Furthermore, an object OBJ7 for displaying the video MV1 in full screen is provided.
The functions that can be provided on the display screen D are not limited to these, and functions may be added or removed as necessary.
The shared video displayed in this way, together with the annotations, comments, and so on, is stored as data in a predetermined storage area by the storage management unit 336 (activity A103).
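Activity A103 could be realized, for example, by persisting these records behind a single storage interface so that the backing store can be the server's storage unit 32, a terminal's storage unit 22 or 42, or an external device. The generic interface below is an assumption for illustration, not the disclosed implementation.

```typescript
// Sketch of the storage management step (activity A103); the interface is assumed.
// In practice T would hold the shared video together with its annotations and comments,
// keyed by the video id.

interface StorageArea<T> {
  save(key: string, value: T): Promise<void>;
  load(key: string): Promise<T | undefined>;
}

// In-memory stand-in; a server database, terminal storage, or external device
// could implement the same interface.
class InMemoryStorageArea<T> implements StorageArea<T> {
  private records = new Map<string, T>();

  async save(key: string, value: T): Promise<void> {
    this.records.set(key, value);
  }

  async load(key: string): Promise<T | undefined> {
    return this.records.get(key);
  }
}
```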
In the information processing method of this embodiment, a search process for a second user who will instruct the first user may also be performed. Such a search process may be performed, for example, after the video reception unit 331 of the server 3 receives the operation video, as shown in the activity diagram of FIG. 6, but this is not a limitation; the search process may also be performed before the operation video is received.
In this search process, first, the search request receiving unit 333 receives, from the first user, a search request for a user with whom to share the operation video (activity A104). The search execution unit 334 then executes a search for users with whom to share the operation video based on the search request (activity A105).
The search process is completed when a second user with whom to share the operation video is identified (activity A106).
This search request may be based on various information; for example, the first user may search for a particular second user based on field of expertise, age, years of experience, affiliation, location, and so on. The second user is then identified by selecting, from among the second-user candidates extracted as a result of the search, the user who will be asked to share the video.
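The attribute-based search can be pictured as filtering a directory of candidate users. The sketch below is illustrative only; the `UserProfile` fields simply mirror the attributes named above (specialty, age, years of experience, affiliation, location) as assumptions about how they might be stored.

```typescript
// Sketch of the attribute-based candidate search (activities A104/A105); illustrative only.

interface UserProfile {
  id: string;
  name: string;
  specialty: string;        // e.g. "hepato-biliary-pancreatic surgery"
  age: number;
  yearsOfExperience: number;
  affiliation: string;
  location: string;
}

interface SearchRequest {
  specialty?: string;
  minYearsOfExperience?: number;
  location?: string;
}

function searchCandidates(users: UserProfile[], req: SearchRequest): UserProfile[] {
  return users.filter(u =>
    (req.specialty === undefined || u.specialty === req.specialty) &&
    (req.minYearsOfExperience === undefined || u.yearsOfExperience >= req.minYearsOfExperience) &&
    (req.location === undefined || u.location === req.location)
  );
}
```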
Alternatively, this search process may be performed as follows. That is, the search execution unit 334 may be configured to identify, from the content of the operation video, the body part on which the operation is being performed, and to search for users who have expertise in that part.
As explained with reference to FIG. 7, the video posted by the first user shows body parts such as the organ OG1. When there is a search request from the first user, the search execution unit 334 of the server 3 may be configured to determine which part is shown in the video and to search for users who have expertise in surgery on that part.
Here, expertise in a part refers, for example, to having specialized knowledge of a given body part, and typically to having expertise corresponding to a classification of clinical medicine. The degree of this expertise may be judged based on, for example, the number of papers submitted by the user or the user's position at the hospital to which the user belongs.
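The degree of expertise could, for example, be turned into a simple score from the two signals mentioned here (paper count and hospital position). The weighting below is purely an assumption for illustration and is not part of the disclosure.

```typescript
// Sketch of scoring a candidate's expertise in a body part; weights are assumptions.

interface ExpertiseRecord {
  userId: string;
  bodyPart: string;       // e.g. "pancreas", as identified from the video
  paperCount: number;     // papers submitted on this part
  position: "resident" | "attending" | "chief" | "professor";
}

const positionWeight: Record<ExpertiseRecord["position"], number> = {
  resident: 1,
  attending: 2,
  chief: 3,
  professor: 4,
};

function expertiseScore(rec: ExpertiseRecord): number {
  // Papers contribute logarithmically so a few extra papers do not dominate.
  return positionWeight[rec.position] * 10 + Math.log1p(rec.paperCount) * 5;
}

function rankByExpertise(records: ExpertiseRecord[], bodyPart: string): ExpertiseRecord[] {
  return records
    .filter(r => r.bodyPart === bodyPart)
    .sort((a, b) => expertiseScore(b) - expertiseScore(a));
}
```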
FIG. 8 is an example of a screen, displayed on the first user terminal 2, that shows the search results. As shown in FIG. 8, the search execution unit 334 may be configured to identify that the organ OG1 captured in the operation video is the pancreas and, based on this identification, to present doctors with expertise in the field of the pancreas as candidates for the second user.
The first user viewing such a display screen D can identify the second user with whom to share the video by pressing at least one (or more) of the buttons BT11, BT21, and BT31.
On the display screen D shown in FIG. 8, pressing the buttons BT12, BT22, and BT32 displays reviews from posters of other operation videos. In this way, in the information processing method of this embodiment, evaluations of the second user may be displayed so as to be viewable by the first user.
In the information processing method of this embodiment, compensation may be paid to the second user with whom the video is shared. Alternatively, the second user with whom the video is shared may not be paid (i.e., the sharing may be free of charge).
If the information processing method of this embodiment is one in which compensation is paid to the second user with whom the video is shared, payment to the second user may be executed by the information processing system 1 of this embodiment, as shown in the activity diagram of FIG. 6.
Specifically, the payment execution unit 335 of the server 3 executes a process in which the first user pays the second user compensation for the second user's writing of annotations and/or adding of text comments.
This compensation may be set appropriately between the first user and the second user. As one example, the amount of compensation may depend on the number and quality of the annotations and comments. In addition, if the second user has a high degree of expertise in the body part that is the subject of the operation video, the compensation may be set higher according to that degree of expertise.
The compensation for asking each second-user candidate to share the video or add comments may also be clearly indicated together with the search results shown in FIG. 8 described above.
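As one hedged illustration of how the compensation amount might depend on the number and quality of annotations and comments and on the degree of expertise, consider the sketch below; every rate and weight in it is an assumption introduced here, not a value taken from the disclosure.

```typescript
// Sketch of a compensation calculation (payment execution step); all rates are assumptions.

interface ContributionSummary {
  annotationCount: number;
  commentCount: number;
  averageQuality: number;   // e.g. 1-5, as rated by the first user (assumed)
  expertiseLevel: number;   // e.g. an expertise score like the one sketched earlier
}

function calculateCompensation(c: ContributionSummary): number {
  const baseFee = 5000;                                   // flat fee per shared video (assumed)
  const perItem = 300 * (c.annotationCount + c.commentCount);
  const qualityMultiplier = 0.8 + 0.1 * c.averageQuality; // up to 1.3x at top quality
  const expertiseBonus = 100 * c.expertiseLevel;
  return Math.round((baseFee + perItem) * qualityMultiplier + expertiseBonus);
}
```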
As described above, according to the information processing method of this embodiment, the annotations and the text comments linked to those annotations are visible to the first user, which helps the first user, who is receiving the education, to understand the content. Medical education can therefore be carried out efficiently.
4. Others
The following aspects may also be adopted with respect to the information processing system 1 according to this embodiment.
Although the above embodiment has been described in terms of the configuration of the information processing system 1, a program that causes a computer to execute each step of the information processing system 1 may also be provided.
In the above embodiment, the first user searches for candidates for the second user; conversely, the system may be configured so that the second user searches for a first user whom the second user wishes to instruct. In this case, for example, the first user can set a tag associated with the operation, and the second user can find the first user by searching for this tag.
In the above embodiment, the server 3 performed the various storage and control operations, but a plurality of external devices may be used instead of the server 3. That is, information such as the learning history may be distributed and stored across a plurality of external devices using blockchain technology or the like.
Furthermore, the present invention may be provided in each of the following aspects.
(1) An information processing system that supports medical education, comprising a control unit, wherein the control unit is configured to execute the following steps: in a video reception step, receiving a surgical operation video from a first user; and in a display control step, displaying the operation video as a shared video viewable by the first user and by a second user different from the first user, wherein the shared video is configured so that the first user and/or the second user can write an annotation, and a screen on which the shared video is displayed is configured so that a text comment by the first user and/or the second user, linked to the annotation, can be added.
(2) The information processing system according to (1) above, wherein, in a search request receiving step, a search request for a user with whom to share the operation video is received from the first user, and, in a search execution step, a search for the user with whom to share the operation video is executed based on the search request.
(3) The information processing system according to (2) above, wherein, in the search execution step, a part on which the operation is being performed is identified from the content of the operation video, and a user having expertise in the part is searched for.
(4) The information processing system according to any one of (1) to (3) above, wherein, in a payment execution step, a process is executed in which the first user pays the second user compensation for the second user's writing of the annotation and/or adding of the text comment.
(5) The information processing system according to any one of (1) to (4) above, wherein, in the display control step, an evaluation of the second user is displayed so as to be viewable by the first user.
(6) The information processing system according to any one of (1) to (5) above, wherein, in the display control step, the annotation and the text comment are displayed in association with time information of the operation video.
(7) The information processing system according to any one of (1) to (6) above, wherein the operation video of the surgery is an operation video of a general surgical procedure.
(8) An information processing method comprising each step of the information processing system according to any one of (1) to (7) above.
(9) A program that causes a computer to execute each step of the information processing system according to any one of (1) to (7) above.
Of course, the present invention is not limited to these aspects.
Finally, although various embodiments according to the present invention have been described, they are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. These embodiments and their modifications are included within the scope and gist of the invention, and within the scope of the invention described in the claims and its equivalents.
1: Information processing system
2: First user terminal
3: Server
4: Second user terminal
5: Network
20: Communication bus
21: Communication unit
22: Storage unit
23: Control unit
24: Display unit
25: Input unit
26: Audio output unit
30: Communication bus
31: Communication unit
32: Storage unit
33: Control unit
40: Communication bus
41: Communication unit
42: Storage unit
43: Control unit
44: Display unit
45: Input unit
46: Audio output unit
331: Video reception unit
332: Display control unit
333: Search request receiving unit
334: Search execution unit
335: Payment execution unit
336: Storage management unit
BT1: Button
BT2: Button
BT11: Button
BT12: Button
BT21: Button
BT22: Button
BT31: Button
BT32: Button
D: Display screen
IF1: Input form
MI1: Medical instrument
MI2: Medical instrument
MV1: Video
OBJ1: Object
OBJ2: Object
OBJ3: Object
OBJ4: Object
OBJ5: Object
OBJ6: Object
OBJ7: Object
OG1: Organ
Claims (9)
- An information processing system that supports medical education, comprising a control unit, wherein the control unit is configured to execute the following steps: in a video reception step, receiving a surgical operation video from a first user; and in a display control step, displaying the operation video as a shared video viewable by the first user and by a second user different from the first user, wherein the shared video is configured so that the first user and/or the second user can write an annotation, and a screen on which the shared video is displayed is configured so that a text comment by the first user and/or the second user, linked to the annotation, can be added.
- The information processing system according to claim 1, wherein, in a search request receiving step, a search request for a user with whom to share the operation video is received from the first user, and, in a search execution step, a search for the user with whom to share the operation video is executed based on the search request.
- The information processing system according to claim 2, wherein, in the search execution step, a part on which the operation is being performed is identified from the content of the operation video, and a user having expertise in the part is searched for.
- The information processing system according to any one of claims 1 to 3, wherein, in a payment execution step, a process is executed in which the first user pays the second user compensation for the second user's writing of the annotation and/or adding of the text comment.
- The information processing system according to any one of claims 1 to 4, wherein, in the display control step, an evaluation of the second user is displayed so as to be viewable by the first user.
- The information processing system according to any one of claims 1 to 5, wherein, in the display control step, the annotation and the text comment are displayed in association with time information of the operation video.
- The information processing system according to any one of claims 1 to 6, wherein the operation video of the surgery is an operation video of a general surgical procedure.
- An information processing method comprising each step of the information processing system according to any one of claims 1 to 7.
- A program that causes a computer to execute each step of the information processing system according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2022/032390 WO2024047696A1 (en) | 2022-08-29 | 2022-08-29 | Information processing system, information processing method, and program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2022/032390 WO2024047696A1 (en) | 2022-08-29 | 2022-08-29 | Information processing system, information processing method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024047696A1 true WO2024047696A1 (en) | 2024-03-07 |
Family
ID=90099086
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/032390 WO2024047696A1 (en) | 2022-08-29 | 2022-08-29 | Information processing system, information processing method, and program |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024047696A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002207832A (en) * | 2000-12-28 | 2002-07-26 | Atsushi Takahashi | Distribution system of internet technology instruction education, and instruction system using communication network |
WO2005093687A1 (en) * | 2004-03-26 | 2005-10-06 | Atsushi Takahashi | 3d entity digital magnifying glass system having 3d visual instruction function |
KR20150117165A (en) * | 2014-04-09 | 2015-10-19 | (주)라파로넷 | Internet based educational information providing system of surgical techniques and skills, and providing Method thereof |
JP2019125211A (en) * | 2018-01-17 | 2019-07-25 | 株式会社教育ネット | Intra-pseudo-identical-space class system |
JP2019162339A (en) * | 2018-03-20 | 2019-09-26 | ソニー株式会社 | Surgery supporting system and display method |
US20200273359A1 (en) * | 2019-02-26 | 2020-08-27 | Surg Time, Inc. | System and method for teaching a surgical procedure |
- 2022-08-29 WO PCT/JP2022/032390 patent/WO2024047696A1/en unknown
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002207832A (en) * | 2000-12-28 | 2002-07-26 | Atsushi Takahashi | Distribution system of internet technology instruction education, and instruction system using communication network |
WO2005093687A1 (en) * | 2004-03-26 | 2005-10-06 | Atsushi Takahashi | 3d entity digital magnifying glass system having 3d visual instruction function |
KR20150117165A (en) * | 2014-04-09 | 2015-10-19 | (주)라파로넷 | Internet based educational information providing system of surgical techniques and skills, and providing Method thereof |
JP2019125211A (en) * | 2018-01-17 | 2019-07-25 | 株式会社教育ネット | Intra-pseudo-identical-space class system |
JP2019162339A (en) * | 2018-03-20 | 2019-09-26 | ソニー株式会社 | Surgery supporting system and display method |
US20200273359A1 (en) * | 2019-02-26 | 2020-08-27 | Surg Time, Inc. | System and method for teaching a surgical procedure |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12033754B2 (en) | Systems and methods for and displaying patient data | |
CN109346158A (en) | Ultrasonic image processing method, computer equipment and readable storage medium storing program for executing | |
CN116344071A (en) | Informatics platform for integrating clinical care | |
US20120278759A1 (en) | Integration system for medical instruments with remote control | |
US20190266495A1 (en) | Database systems and interactive user interfaces for dynamic conversational interactions | |
CN104685449A (en) | User interface element focus based on user's gaze | |
JP2009238038A (en) | Medical report system, medical report browse device, medical report program, and method of browsing medical report | |
US20150193584A1 (en) | System and method for clinical procedure timeline tracking | |
Wang et al. | SurfaceSlide: a multitouch digital pathology platform | |
EP3416039A1 (en) | Image processing device, image processing system, and image processing method | |
DE112016002384T5 (en) | Auxiliary layer with automated extraction | |
KR20210067999A (en) | Method and device that provides patient transfer and mediation service | |
JP2014119866A (en) | Medical information processing apparatus and program | |
KR20200024374A (en) | Method and electronic device for matching medical service | |
US20130246067A1 (en) | User interface for producing automated medical reports and a method for updating fields of such interface on the fly | |
Pfaff et al. | Analysis of the cognitive demands of electronic health record use | |
CN109635304A (en) | Multi-language system data processing method and device | |
WO2024047696A1 (en) | Information processing system, information processing method, and program | |
US11768573B2 (en) | Graphical user interface marking feedback | |
BR112020018877A2 (en) | METHOD, LOCAL COMPUTER DEVICE AND LEGIBLE STORAGE MEDIA BY NON-TRANSITIONAL COMPUTER FOR TRANSMITTING FILES THROUGH A WEB SOCKET CONNECTION IN A NETWORK COLLABORATION WORK SPACE | |
US20100125196A1 (en) | Ultrasonic Diagnostic Apparatus And Method For Generating Commands In Ultrasonic Diagnostic Apparatus | |
JP6947253B2 (en) | Electronic medical record system and electronic medical record program | |
US20160378929A1 (en) | Team medical support device, method for controlling team medical support device and team medical support system | |
CN113391737A (en) | Interface display control method and device, storage medium and electronic equipment | |
KR102152940B1 (en) | Medical practice contents cps(contents sercice platform) interworking interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22957300 Country of ref document: EP Kind code of ref document: A1 |