
CN114449701A - Medical record information-based light type regulation and control method and device and electronic equipment - Google Patents

Medical record information-based light type regulation and control method and device and electronic equipment

Info

Publication number
CN114449701A
Authority
CN
China
Prior art keywords
information
pet
candidate disease
matrix
medical record
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111636583.3A
Other languages
Chinese (zh)
Other versions
CN114449701B (en)
Inventor
彭永鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New Ruipeng Pet Healthcare Group Co Ltd
Original Assignee
New Ruipeng Pet Healthcare Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New Ruipeng Pet Healthcare Group Co Ltd filed Critical New Ruipeng Pet Healthcare Group Co Ltd
Priority to CN202111636583.3A priority Critical patent/CN114449701B/en
Publication of CN114449701A publication Critical patent/CN114449701A/en
Application granted granted Critical
Publication of CN114449701B publication Critical patent/CN114449701B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H05: ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B: ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00: Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10: Controlling the light source
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60: ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • H: ELECTRICITY
    • H05: ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B: ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00: Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10: Controlling the light source
    • H05B47/165: Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application discloses a method, a device and electronic equipment for regulating and controlling the light type based on medical record information, wherein the method comprises the following steps: acquiring state information of a pet to be examined, and performing feature extraction on the state information to obtain a first feature vector; determining the medical record of the pet to be examined in a medical record library according to the pet information of the pet to be examined; determining candidate disease information corresponding to the pet to be examined according to the first feature vector and the medical record of the pet to be examined; determining an examining doctor for the pet to be examined according to the candidate disease information, and acquiring examination habit information of the examining doctor; generating inspection flow information according to the candidate disease information and the examination habit information; and generating a light type regulation instruction according to the inspection flow information, and regulating the light type of the inspection lamp according to the light type regulation instruction.

Description

Medical record information-based light type regulation and control method and device and electronic equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a device for regulating and controlling light types based on medical record information and electronic equipment.
Background
With rising living standards, more families keep pets, and some regard their pets as part of the family. Because owners are often away at work or otherwise unable to watch their pets at all times, pets may become injured or ill. At present, when a pet is examined, different examination lamps are selected for different suspected diseases, for example: a daylight inspection lamp, an infrared inspection lamp, a Wood's ultraviolet lamp, and the like. However, since the actual disease cannot be determined before the examination is completed, the corresponding lighting equipment must be fetched separately for each step of a single examination, so the pet has to be calmed again each time the lighting equipment is replaced, which greatly reduces examination efficiency.
Disclosure of Invention
In order to solve the problems in the prior art, the embodiments of the application provide a method, a device and electronic equipment for regulating and controlling the light type based on medical record information. Various inspection lights are integrated into one device, candidate diseases are determined autonomously according to the medical record information of the pet, and the type of the inspection light is adjusted autonomously in combination with the habits of the examining doctor, so that the doctor neither needs to replace the inspection lamp nor divert attention to controlling it, which improves examination efficiency.
In a first aspect, an embodiment of the present application provides a method for regulating and controlling a light type based on medical record information, including:
acquiring state information of a pet to be examined, and performing feature extraction on the state information to obtain a first feature vector;
determining the medical record of the pet to be examined in a medical record library according to the pet information of the pet to be examined;
determining candidate disease information corresponding to the pet to be examined according to the first feature vector and the medical record of the pet to be examined;
determining an examining doctor for the pet to be examined according to the candidate disease information, and acquiring examination habit information of the examining doctor;
generating inspection flow information according to the candidate disease information and the examination habit information;
and generating a light type regulation instruction according to the inspection flow information, and regulating the light type of the inspection lamp according to the light type regulation instruction.
In a second aspect, an embodiment of the present application provides a device for regulating and controlling a light type based on medical record information, including:
an extraction module, used for acquiring the state information of the pet to be examined and performing feature extraction on the state information to obtain a first feature vector;
a matching module, used for determining the medical record of the pet to be examined in the medical record library according to the pet information of the pet to be examined, and determining candidate disease information corresponding to the pet to be examined according to the first feature vector and the medical record of the pet to be examined;
a processing module, used for determining an examining doctor for the pet to be examined according to the candidate disease information, acquiring examination habit information of the examining doctor, and generating inspection flow information according to the candidate disease information and the examination habit information;
and a regulation module, used for generating a light type regulation instruction according to the inspection flow information and regulating the light type of the inspection lamp according to the light type regulation instruction.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, the processor being coupled to the memory; the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to cause the electronic device to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored thereon, the computer program causing a computer to perform the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to the first aspect.
The implementation of the embodiment of the application has the following beneficial effects:
in the embodiment of the application, various inspection lamps are integrated into one device. The state information of the pet to be examined is acquired and subjected to feature extraction to obtain a first feature vector, and the medical record of the pet to be examined is then determined in the medical record library according to the pet information. On this basis, the candidate disease information corresponding to the pet to be examined is determined according to the first feature vector and the medical record. An examining doctor for the pet to be examined can then be determined based on the candidate disease information, and the examination habit information of that doctor obtained. Finally, inspection flow information is generated according to the candidate disease information and the examination habit information, and a light type regulation instruction is generated according to the inspection flow information to regulate the light type of the inspection lamp. In this way, suspected candidate diseases can be determined autonomously from the pet's medical record information and state information, and the inspection light type can be adjusted autonomously in combination with the examination habits of the corresponding examining doctor, so that the doctor neither needs to replace the inspection lamp nor divert attention to controlling it, which improves examination efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a hardware structure of a medical record information-based lighting type adjusting and controlling device according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for regulating and controlling lighting types based on medical record information according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating a method for obtaining status information of a pet to be examined according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a topological relation diagram provided in an embodiment of the present application;
fig. 5 is a block diagram of functional modules of a medical record information-based lighting type control device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application are within the scope of protection of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
First, referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a medical record information-based lighting type control device according to an embodiment of the present application. The medical record information-based light type regulating and controlling device 100 comprises at least one processor 101, a communication line 102, a memory 103 and at least one communication interface 104.
In this embodiment, the processor 101 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present disclosure.
The communication line 102 may include a path for carrying information between the aforementioned components.
The communication interface 104 may be any transceiver or similar device (e.g., an antenna) for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 103 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), optical disc storage (including CD-ROM, laser disc, DVD, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In this embodiment, the memory 103 may be independent and connected to the processor 101 through the communication line 102. The memory 103 may also be integrated with the processor 101. The memory 103 provided in the embodiments of the present application may generally have a nonvolatile property. The memory 103 is used for storing computer-executable instructions for executing the scheme of the application, and is controlled by the processor 101 to execute. The processor 101 is configured to execute computer-executable instructions stored in the memory 103, thereby implementing the methods provided in the embodiments of the present application described below.
In alternative embodiments, computer-executable instructions may also be referred to as application code, which is not specifically limited in this application.
In alternative embodiments, processor 101 may include one or more CPUs, such as CPU0 and CPU1 of FIG. 1.
In an alternative embodiment, the lighting type control device 100 based on medical record information may include a plurality of processors, such as the processor 101 and the processor 107 in fig. 1. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In an optional embodiment, the medical record information-based light type control device 100 may be a server, for example an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and big data and artificial intelligence platforms. The device 100 can further include an output device 105 and an input device 106. The output device 105 is in communication with the processor 101 and may display information in a variety of ways; for example, it may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 106 is in communication with the processor 101 and may receive user input in a variety of ways; for example, it may be a mouse, a keyboard, a touch screen device, or a sensing device.
The medical record information-based light type control device 100 can be a general device or a special device. The embodiment of the present application does not limit the type of the lighting type control device 100 based on medical record information.
Next, it should be noted that the embodiments disclosed in the present application may acquire and process related data based on artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The method for regulating and controlling the light type based on medical record information disclosed by the application is described as follows:
referring to fig. 2, fig. 2 is a schematic flowchart of a lighting type adjustment and control method based on medical record information according to an embodiment of the present application. The medical record information-based light type regulation and control method comprises the following steps of:
201: acquiring state information of the pet to be checked, and performing feature extraction on the state information to obtain a first feature vector.
In this embodiment, the state information of the pet can be understood as information on the pet's behavior, emotion, vocalizations, and the like in the current time period, and can be obtained by prompting the pet with an external stimulus. Illustratively, this embodiment proposes a method for acquiring the state information of the pet to be examined, as shown in fig. 3, the method comprising:
301: and acquiring real-time video stream data containing the pet to be inspected.
In this embodiment, the real-time video stream data may be obtained in real time by the camera device after the pet to be inspected is subjected to a preset stimulus, and the preset stimulus may be determined according to the type, sex, age, and state of the pet to be inspected before the stimulus is applied. Meanwhile, the real-time video stream data may include a plurality of video frames.
302: at least one key frame is determined among a plurality of video frames.
In this embodiment, each of the at least one key frame may be a video frame in which the pet to be examined responds abnormally to the stimulus. For example, feature extraction may be performed on each video frame to obtain the posture, facial expression, and the like of the pet in that frame. These are then compared with the responses of other pets of the same type, same sex, and similar age to the same stimulus recorded in a historical database, so as to identify, among the plurality of video frames, the frames showing an abnormal response as key frames.
In an optional implementation, the audio track may also be used: each audio frame is combined with its corresponding video frame, feature extraction is performed, the obtained features are compared with the features of normal behavior in the historical database, and the video frames showing an abnormal response are then screened out as key frames.
303: and extracting the features of each key frame to obtain at least one second feature vector corresponding to at least one key frame one by one.
In the present embodiment, although a single video frame can show that the pet has an abnormal state, it cannot show the cause of the abnormal state, or the process of the pet changing from the normal state to the abnormal state. Therefore, if only the feature extraction is performed on the key frame, the obtained second feature vector also has the above-mentioned defects, which results in a reduction in the accuracy of the subsequent prediction.
Based on this, in this embodiment, each key frame together with the n-1 video frames preceding it may be extracted, giving n video frames per key frame, where n is an integer greater than or equal to 1. Feature extraction is then performed on these n video frames to obtain n third feature vectors in one-to-one correspondence with them. Finally, the n third feature vectors are spliced in the order of their corresponding video frames to obtain the second feature vector of each key frame. In this way, the features of the n-1 preceding frames are merged into the features of the key frame, so that the second feature vector also captures the development of the event from normal state to abnormal state, which resolves the defect described above.
In an optional embodiment, the n third feature vectors may be spliced in a longitudinal splicing manner. Illustratively, assuming that n equals 5 and the key frame is the fifth frame, the first 4 frames are [1st frame, 2nd frame, 3rd frame, 4th frame]. After feature extraction, the 1st feature vector corresponding to the 1st frame, the 2nd feature vector corresponding to the 2nd frame, the 3rd feature vector corresponding to the 3rd frame, and the 4th feature vector corresponding to the 4th frame are obtained, and the second feature vector corresponding to the key frame can then be expressed by formula (I):

$$p = \begin{bmatrix} x_1^\top & x_2^\top & x_3^\top & x_4^\top & x_5^\top \end{bmatrix}^\top$$

where $p$ represents the second feature vector and $x_i$ represents the feature vector of the $i$th frame ($x_5$ being that of the key frame).
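The longitudinal splicing can be sketched as follows. This is only an illustration: the per-frame feature dimension is hypothetical, random vectors stand in for whatever feature extractor the embodiment actually uses, and `np.concatenate` implements the vertical stacking of formula (I).

```python
import numpy as np

def splice_second_feature_vector(frame_features):
    """Stack the n third feature vectors vertically, in frame order, to
    form the second feature vector p of the key frame (a sketch)."""
    return np.concatenate(frame_features, axis=0)

# n = 5: the key frame is the fifth frame, preceded by frames 1-4.
d = 8  # hypothetical per-frame feature dimension
frames = [np.random.rand(d) for _ in range(5)]
p = splice_second_feature_vector(frames)  # shape (5 * d,)
```

The result is a single vector whose first d entries come from the 1st frame and whose last d entries come from the key frame, preserving temporal order.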
304: and establishing a topological relation graph according to the at least one second characteristic vector, and taking the topological relation graph as the state information of the pet to be detected.
In this embodiment, each of the at least one second feature vector may be regarded as a node in the topological relation graph. Then, a degree of association between any two different second feature vectors is calculated, for example, by calculating a euclidean distance or a mahalanobis distance between the two as the degree of association between the two. And finally, connecting the nodes corresponding to the two different second eigenvectors with the association degree larger than the preset threshold value through line segments to obtain the topological relation graph.
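A minimal sketch of the graph construction described above, taking the Euclidean distance as the degree of association (as the text allows) and using an illustrative preset threshold:

```python
import numpy as np

def build_topology(vectors, threshold):
    """Treat each second feature vector as a node and join two nodes when
    their degree of association (here the Euclidean distance between them)
    exceeds a preset threshold. Returns the adjacency matrix of the
    resulting topological relation graph; the threshold is illustrative."""
    k = len(vectors)
    adj = np.zeros((k, k), dtype=int)
    for i in range(k):
        for j in range(i + 1, k):
            association = np.linalg.norm(vectors[i] - vectors[j])
            if association > threshold:
                adj[i, j] = adj[j, i] = 1
    return adj

# Three hypothetical second feature vectors (2-D for readability).
nodes = [np.array([0.0, 0.0]), np.array([3.0, 4.0]), np.array([0.1, 0.0])]
A = build_topology(nodes, threshold=1.0)
```

A Mahalanobis distance could be substituted for `np.linalg.norm` without changing the structure of the sketch.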
In this way, in the present embodiment, since the state information of the pet to be inspected is a topological relation diagram, it is possible to perform the feature extraction of the state information by a graph convolution method.
Illustratively, first, a first adjacency matrix may be constructed from the topological relation graph. Specifically, each point in the topological relation graph is assigned both a row and a column of the matrix; when two points are related, i.e. there is a connecting line between them, the entry at the intersection of the corresponding row and column is set to 1, and otherwise to 0, thereby obtaining the first adjacency matrix. Illustratively, fig. 4 shows a topological relation graph, and taking fig. 4 as an example, the following first adjacency matrix may be obtained:
[first adjacency matrix corresponding to fig. 4]
then, as can be seen from the above-mentioned construction manner of the first adjacency matrix, in the present embodiment, the first adjacency matrix ignores the feature of each point itself in the topological relation diagram. Therefore, the feature of each point can be added into the first adjacency matrix by means of feature addition, so as to obtain the second adjacency matrix.
Specifically, the first adjacent matrix may be simply added with the feature by superimposing the identity matrix, and any other method that can add the feature to the first adjacent matrix may be applied to this embodiment, which is not limited in this embodiment.
A first degree matrix may then be determined from the second adjacency matrix. In this embodiment, the relation between the second adjacency matrix and the first degree matrix can be expressed by formula (II):

$$\tilde{D}_{nn} = \sum_{m} \tilde{A}_{nm}$$

where $\tilde{D}_{nn}$ represents the element in the $n$th row and $n$th column of the first degree matrix, $\tilde{A}_{nm}$ represents the element in the $n$th row and $m$th column of the second adjacency matrix, and $n$ and $m$ are integers greater than or equal to 1.
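A small numeric sketch of these two steps, using a hypothetical first adjacency matrix: the second adjacency matrix adds the identity to the first adjacency matrix, and each diagonal entry of the first degree matrix is the corresponding row sum of the second adjacency matrix, as formula (II) states.

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])               # hypothetical first adjacency matrix
A_tilde = A + np.eye(3, dtype=int)      # second adjacency matrix (self-features added)
D_tilde = np.diag(A_tilde.sum(axis=1))  # first degree matrix: D[n][n] = sum over m of A_tilde[n][m]
```

Here node 0 has two neighbours plus its self-loop, so its degree entry is 3, while nodes 1 and 2 each get 2.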
Finally, graph convolution processing is performed according to the second adjacency matrix and the first degree matrix to obtain the first feature vector. Specifically, the inverse of the first degree matrix may be obtained and its square root taken, yielding the first matrix $\tilde{D}^{-\frac{1}{2}}$. The first matrix, the second adjacency matrix, and the learning matrix corresponding to the graph convolution processing are then input into an activation function to obtain the first feature vector, where the activation function can be expressed by formula (III):

$$h = \zeta\left(\tilde{D}^{-\frac{1}{2}}\,\tilde{A}\,\tilde{D}^{-\frac{1}{2}}\,X\,W\right)$$

where $\zeta$ represents a sigmoid activation function, $\tilde{D}$ represents the first degree matrix, $\tilde{A}$ represents the second adjacency matrix, $X$ represents the matrix of node features (the second feature vectors), and $W$ represents the learning matrix.
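The graph convolution step can be sketched as below. All shapes and values are illustrative, and the node feature matrix `X` is assumed to stack the second feature vectors row-wise; this is a generic sketch of the normalized-adjacency convolution, not the patented implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def graph_conv(A_tilde, X, W):
    """One graph-convolution step in the shape described above:
    sigmoid(D^{-1/2} @ A_tilde @ D^{-1/2} @ X @ W), where D is the
    degree matrix of A_tilde (a sketch)."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return sigmoid(d_inv_sqrt @ A_tilde @ d_inv_sqrt @ X @ W)

A_tilde = np.array([[1., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 1.]])  # second adjacency matrix (with self-loops)
X = np.random.rand(3, 4)            # node features: the second feature vectors
W = np.random.rand(4, 2)            # learning matrix
h = graph_conv(A_tilde, X, W)
```

Because the sigmoid is applied elementwise, every entry of `h` lies strictly between 0 and 1.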
202: and determining the medical record of the pet to be detected in the medical record library according to the pet information of the pet to be detected.
In the present embodiment, the pet information may include the name, type, sex, age, and the like of the pet, and thus the medical record of the pet to be examined can be obtained by comparing the pet information with the pet information described in each medical record in the medical record library.
203: and determining candidate disease information corresponding to the pet to be detected according to the first feature vector and the medical record of the pet to be detected.
In this embodiment, suspected diseases and their corresponding probabilities can be obtained by matching the first feature vector against both the medical record of the pet to be examined and a disease library. The intersection of the results matched in the medical record and the results matched in the disease library is then taken as a filter, and for each disease in the intersection the two corresponding probabilities are multiplied to give its new probability. Finally, the diseases whose new probability is greater than or equal to a first threshold are taken as the candidate disease information.
Specifically, first, matching can be performed in the medical record of the pet to be examined according to the first feature vector to obtain at least one piece of first candidate disease information and at least one first probability in one-to-one correspondence; meanwhile, matching is performed in the disease library according to the first feature vector to obtain at least one piece of second candidate disease information and at least one second probability in one-to-one correspondence. Then, intersection processing is performed on the first candidate disease information and the second candidate disease information to obtain at least one piece of third candidate disease information. The first probability and second probability corresponding to each piece of third candidate disease information are multiplied to obtain at least one third probability in one-to-one correspondence with the third candidate disease information. Finally, the candidate disease information corresponding to the pet to be examined is selected from the third candidate disease information according to the third probabilities, where each selected piece has a third probability greater than or equal to the first threshold.
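The intersect-and-multiply screening described above can be sketched as follows; the disease names, probabilities, and threshold are all illustrative stand-ins for the record- and library-matching outputs.

```python
def screen_candidates(record_matches, library_matches, first_threshold):
    """Intersect the diseases matched in the medical record with those
    matched in the disease library, multiply the two probabilities of each
    shared disease, and keep those at or above the first threshold."""
    shared = record_matches.keys() & library_matches.keys()
    scored = {d: record_matches[d] * library_matches[d] for d in shared}
    return {d: p for d, p in scored.items() if p >= first_threshold}

record_matches = {"skin infection": 0.9, "ear mites": 0.6}   # from the medical record
library_matches = {"skin infection": 0.8, "fracture": 0.7,
                   "ear mites": 0.5}                          # from the disease library
candidates = screen_candidates(record_matches, library_matches, 0.5)
```

In this toy run "skin infection" survives (0.9 x 0.8 = 0.72 >= 0.5), "ear mites" is filtered out (0.30), and "fracture" never enters the intersection.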
Thus, through the screening of the disease library, the maximum disease set corresponding to the characteristics can be found; through the screening of the medical records, a special disease set corresponding to the pet to be detected and the characteristics can be found out. And then obtaining a group of symptoms with highest probability by means of intersection, thereby taking the symptoms of which the probability meets the condition as candidate symptoms, and greatly improving the precision of selecting the symptoms.
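As an illustration, the intersection-and-multiplication step described above can be sketched as follows; the function name, condition names, and probabilities are hypothetical stand-ins for the matcher's actual output:

```python
# Hypothetical sketch: combine condition candidates matched from the medical
# record and from the disease library, multiply the paired probabilities, and
# keep only conditions whose combined probability meets the first threshold.
def select_candidate_conditions(record_matches, library_matches, threshold):
    common = record_matches.keys() & library_matches.keys()  # intersection
    combined = {c: record_matches[c] * library_matches[c] for c in common}
    return {c: p for c, p in combined.items() if p >= threshold}

record_matches = {"condition 1": 0.95, "condition 2": 0.90, "condition 4": 0.80}
library_matches = {"condition 1": 0.90, "condition 2": 0.50, "condition 3": 0.99}
result = select_candidate_conditions(record_matches, library_matches, 0.5)
print(sorted(result))  # ['condition 1'] — condition 2's product (0.45) falls below the threshold
```

Conditions matched in only one of the two sources drop out at the intersection step, which is what confines the final set to disorders supported by both the pet's own history and the general disease library.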
204: and determining the examination doctor of the pet to be examined according to the candidate disease information, and acquiring examination habit information of the examination doctor.
In this embodiment, the candidate disease information determined in step 203 may comprise a plurality of items. Therefore, for each item of candidate disease information, a doctor list of the doctors who can examine and treat that disease can be obtained. Each doctor in the list also corresponds to a recommendation coefficient, which may be determined by the doctor's years of service, work evaluations, and the like.
Therefore, a new doctor list is obtained by taking the intersection of the doctor lists corresponding to the items of candidate disease information. The recommendation coefficients corresponding to each doctor in the intersected list are then weighted and summed to obtain a new recommendation coefficient, where the weights may be the probabilities of the respective candidate diseases.
Finally, the doctors in the new doctor list are sorted from high to low according to the new recommendation coefficients, and the schedule of each doctor is checked in turn. When a doctor is found who is not currently scheduled, that doctor is taken as the examining doctor of the pet to be examined.
Specifically, it is assumed that the candidate disease information determined by step 203 is: [condition 1, 0.95; condition 2, 0.90; condition 3, 0.85]. Then, by looking up condition 1, the list of doctors who can examine and treat condition 1 is: [doctor 1, 95 points; doctor 2, 90 points]; the list of doctors who can examine and treat condition 2, found by looking up condition 2, is: [doctor 3, 95 points; doctor 1, 90 points]; and the list of doctors who can examine and treat condition 3, found by looking up condition 3, is: [doctor 1, 95 points; doctor 3, 90 points]. Therefore, after the intersection processing, the doctor list obtained is: [doctor 1]. Meanwhile, the new recommendation coefficient of doctor 1 can be expressed by formula (iv):
k = Σ_{i=1}^{j} h_i × g_i    (iv)

where k denotes the new recommendation coefficient, h_i denotes the probability of the corresponding condition i, g_i denotes the recommendation coefficient of the doctor in the doctor list corresponding to condition i, j denotes the total number of items of candidate disease information, and i and j are integers greater than or equal to 1.
Based on this, in the above example, the new recommendation coefficient of doctor 1 is: 0.95×95 + 0.90×90 + 0.85×95 = 252. Thus, the new doctor list obtained is: [doctor 1, 252 points]. At this time, since there is only one doctor in the list, when it is determined that this doctor is not currently scheduled, the doctor can be taken as the examining doctor of the pet to be examined.
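The arithmetic of this worked example can be checked in a few lines; this is a sketch using the probabilities and scores from the example above:

```python
# Weighted sum of formula (iv): k = sum over i of h_i * g_i, for doctor 1.
condition_probs = [0.95, 0.90, 0.85]  # h_i: probabilities of conditions 1-3
doctor_scores = [95, 90, 95]          # g_i: doctor 1's score in each doctor list
k = round(sum(h * g for h, g in zip(condition_probs, doctor_scores)), 2)
print(k)  # 252.0
```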
Meanwhile, in the present embodiment, the examination habit information of the examining doctor can be obtained by analyzing the historical examination data thereof.
205: and generating examination flow information according to the candidate disease information and the examination habit information.
In this embodiment, at least one inspection item is determined according to the candidate disease information, and the at least one inspection item is sorted according to the inspection habit information to obtain the inspection flow information.
206: and generating a light type regulating and controlling instruction according to the inspection flow information, and regulating and controlling the light type of the inspection lamp according to the light type regulating and controlling instruction.
In summary, in the method for regulating and controlling the light type based on medical record information provided by the invention, various inspection lights are integrated; the state information of the pet to be inspected is obtained and feature extraction is performed on it to obtain a first feature vector, and the medical record of the pet to be inspected is then determined in the medical record library according to the pet information of the pet to be inspected. Based on this, the candidate disease information corresponding to the pet to be inspected is determined according to the first feature vector and the medical record of the pet to be inspected. Then, an examining doctor who examines the pet to be inspected can be determined based on the candidate disease information, and the examination habit information of the examining doctor is obtained. Finally, inspection flow information is generated according to the candidate disease information and the inspection habit information, and a light type regulation and control instruction is generated according to the inspection flow information to regulate and control the light type of the inspection lamp. In this way, suspected candidate conditions are autonomously determined according to the medical record information and the state information of the pet, and the type of the inspection light is autonomously adjusted in combination with the inspection habits of the examining doctor corresponding to the candidate conditions, so that the doctor does not need to be distracted by replacing or controlling the inspection lamp, which improves inspection efficiency.
Referring to fig. 5, fig. 5 is a block diagram illustrating functional modules of a lighting type control device based on medical record information according to an embodiment of the present disclosure. As shown in fig. 5, the lighting type control device 500 based on medical record information includes:
the extracting module 501 is configured to obtain state information of a pet to be inspected, and perform feature extraction on the state information to obtain a first feature vector;
the matching module 502 is used for determining the medical record of the pet to be detected in the medical record library according to the pet information of the pet to be detected, and determining candidate disease information corresponding to the pet to be detected according to the first characteristic vector and the medical record of the pet to be detected;
the processing module 503 is configured to determine an examining doctor of the pet to be examined according to the candidate disease information, acquire examination habit information of the examining doctor, and generate examination flow information according to the candidate disease information and the examination habit information;
and the regulation and control module 504 is configured to generate a light type regulation and control instruction according to the inspection flow information, and regulate and control the light type of the inspection lamp according to the light type regulation and control instruction.
In an embodiment of the present invention, in obtaining the status information of the pet to be detected, the extracting module 501 is specifically configured to:
the method comprises the steps of obtaining real-time video stream data containing a pet to be detected, wherein the real-time video stream data are obtained in real time through a camera device after preset stimulation is carried out on the pet to be detected, and the real-time video stream data comprise a plurality of video frames;
determining at least one key frame in a plurality of video frames, wherein each key frame in the at least one key frame is a video frame of a pet to be detected which generates abnormal response to the stimulus;
extracting features of each key frame to obtain at least one second feature vector, wherein the at least one second feature vector corresponds to the at least one key frame one to one;
and establishing a topological relation graph according to the at least one second characteristic vector, and taking the topological relation graph as the state information of the pet to be detected.
In an embodiment of the present invention, in performing feature extraction on each key frame to obtain at least one second feature vector, the extraction module 501 is specifically configured to:
extracting each key frame together with the n-1 video frames preceding it to obtain n video frames corresponding to each key frame, wherein n is an integer greater than or equal to 1;
performing feature extraction on n frames of video frames to obtain n third feature vectors, wherein the n third feature vectors correspond to the n frames of video frames one to one;
and splicing the n third feature vectors to obtain a second feature vector corresponding to each key frame.
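A minimal sketch of this splicing step might look as follows; the per-frame feature extractor here is a stand-in (mean and standard deviation of pixel intensities), not the embodiment's actual extractor:

```python
import numpy as np

def second_feature_vector(frames, key_idx, n, extract):
    """Concatenate the third feature vectors of a key frame and its n-1 predecessors."""
    start = max(0, key_idx - (n - 1))      # key frame plus the n-1 frames before it
    window = frames[start:key_idx + 1]
    thirds = [extract(f) for f in window]  # n third feature vectors, one per frame
    return np.concatenate(thirds)          # spliced into one second feature vector

# Ten dummy 4x4 "frames" whose pixel values equal the frame index.
frames = [np.full((4, 4), float(i)) for i in range(10)]
vec = second_feature_vector(frames, key_idx=5, n=3,
                            extract=lambda f: np.array([f.mean(), f.std()]))
print(vec.shape)  # (6,) — three frames, two features each
```

Because the n per-frame vectors are concatenated rather than averaged, the second feature vector preserves the short temporal context immediately before the abnormal response.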
In an embodiment of the present invention, in terms of performing feature extraction on the state information to obtain a first feature vector, the extraction module 501 is specifically configured to:
constructing a first adjacency matrix according to the topological relation graph;
adding features of the first adjacency matrix to obtain a second adjacency matrix;
determining a first degree matrix according to the second adjacency matrix;
and performing graph convolution processing according to the second adjacent matrix and the first degree matrix to obtain a first feature vector.
In an embodiment of the present invention, in terms of obtaining the first feature vector by performing graph convolution processing according to the second adjacency matrix and the first degree matrix, the extracting module 501 is specifically configured to:
obtaining an inverse matrix of the first degree matrix, and performing square-root processing on the inverse matrix to obtain a first matrix;
inputting the first matrix, the second adjacency matrix and the learning matrix corresponding to the graph convolution processing into an activation function to obtain a first feature vector, wherein the activation function can be expressed by formula (v):

H = ζ(D̂^(-1/2) Â D̂^(-1/2) W)    (v)

where ζ represents a sigmoid activation function, D̂ represents the first degree matrix, Â represents the second adjacency matrix, and W represents the learning matrix.
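Assuming this is the standard normalized graph-convolution propagation rule, the computation can be sketched with NumPy as follows; this is a sketch, not the embodiment's exact implementation, and the 2-node graph and all-ones learning matrix are illustrative:

```python
import numpy as np

def graph_conv(adj, weights):
    a_hat = adj + np.eye(adj.shape[0])           # second adjacency matrix: add self-loops
    degree = np.diag(a_hat.sum(axis=1))          # first degree matrix
    d_inv_sqrt = np.linalg.inv(np.sqrt(degree))  # first matrix: D^(-1/2)
    z = d_inv_sqrt @ a_hat @ d_inv_sqrt @ weights
    return 1.0 / (1.0 + np.exp(-z))              # sigmoid activation zeta

adj = np.array([[0.0, 1.0],
                [1.0, 0.0]])                     # two mutually connected nodes
w = np.ones((2, 2))                              # illustrative learning matrix
first_feature_vector = graph_conv(adj, w)
print(first_feature_vector.shape)  # (2, 2)
```

The symmetric D^(-1/2) normalization on both sides of the adjacency matrix keeps the propagated values on a comparable scale regardless of how many neighbors each node has.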
In an embodiment of the present invention, in determining candidate disease information corresponding to a pet to be detected according to the first feature vector and a medical record of the pet to be detected, the matching module 502 is specifically configured to:
matching in the medical record of the pet to be detected according to the first feature vector to obtain at least one first candidate disease information and at least one first probability, wherein the at least one first candidate disease information corresponds to the at least one first probability one to one;
matching in a disease library according to the first feature vector to obtain at least one second candidate disease information and at least one second probability, wherein the at least one second candidate disease information and the at least one second probability are in one-to-one correspondence;
performing intersection processing on the at least one first candidate disease information and the at least one second candidate disease information to obtain at least one third candidate disease information;
multiplying a first probability value and a second probability value corresponding to each piece of third candidate disease information in the at least one piece of third candidate disease information to obtain at least one third probability value, wherein the at least one third probability value is in one-to-one correspondence with the at least one piece of third candidate disease information;
and determining candidate disease state information corresponding to the pet to be checked in the at least one third candidate disease state information according to the at least one third probability value, wherein the third probability value corresponding to the candidate disease state information corresponding to the pet to be checked is larger than or equal to the first threshold value.
In an embodiment of the present invention, in generating the examination flow information according to the candidate disease information and the examination habit information, the control module 504 is specifically configured to:
determining at least one examination item according to the candidate disease information;
and sequencing at least one inspection item according to the inspection habit information to obtain inspection flow information.
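The two operations of this module — deriving items and ordering them by habit — can be sketched as follows; the item names and the habitual order are hypothetical:

```python
def order_examination_items(items, habitual_order):
    """Sort examination items by the doctor's habitual order; unknown items go last."""
    rank = {item: i for i, item in enumerate(habitual_order)}
    return sorted(items, key=lambda item: rank.get(item, len(habitual_order)))

items = ["blood test", "eye examination", "skin scrape"]   # from candidate conditions
habit = ["eye examination", "skin scrape", "blood test"]   # examining doctor's habit
print(order_examination_items(items, habit))
# ['eye examination', 'skin scrape', 'blood test']
```

Since Python's sort is stable, items the doctor has no recorded habit for keep their original relative order at the end of the flow.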
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 600 includes a transceiver 601, a processor 602, and a memory 603, which are connected to each other by a bus 604. The memory 603 is used to store computer programs and data, and can transfer the data stored in the memory 603 to the processor 602.
The processor 602 is configured to read the computer program in the memory 603 to perform the following operations:
acquiring state information of a pet to be checked, and performing feature extraction on the state information to obtain a first feature vector;
determining the medical record of the pet to be detected in a medical record library according to the pet information of the pet to be detected;
determining candidate disease information corresponding to the pet to be detected according to the first feature vector and the medical record of the pet to be detected;
determining an examining doctor of the pet to be examined according to the candidate disease information, and acquiring examination habit information of the examining doctor;
generating inspection flow information according to the candidate disease information and the inspection habit information;
and generating a light type regulating and controlling instruction according to the inspection flow information, and regulating and controlling the light type of the inspection lamp according to the light type regulating and controlling instruction.
In an embodiment of the present invention, in obtaining the status information of the pet to be inspected, the processor 602 is specifically configured to perform the following operations:
the method comprises the steps of obtaining real-time video stream data containing a pet to be detected, wherein the real-time video stream data are obtained in real time through a camera device after preset stimulation is carried out on the pet to be detected, and the real-time video stream data comprise a plurality of video frames;
determining at least one key frame in a plurality of video frames, wherein each key frame in the at least one key frame is a video frame of a pet to be detected which generates abnormal response to the stimulus;
extracting features of each key frame to obtain at least one second feature vector, wherein the at least one second feature vector corresponds to the at least one key frame one to one;
and establishing a topological relation graph according to the at least one second characteristic vector, and taking the topological relation graph as the state information of the pet to be detected.
In an embodiment of the present invention, in performing feature extraction on each key frame to obtain at least one second feature vector, the processor 602 is specifically configured to perform the following operations:
extracting each key frame together with the n-1 video frames preceding it to obtain n video frames corresponding to each key frame, wherein n is an integer greater than or equal to 1;
performing feature extraction on n frames of video frames to obtain n third feature vectors, wherein the n third feature vectors correspond to the n frames of video frames one to one;
and splicing the n third feature vectors to obtain a second feature vector corresponding to each key frame.
In an embodiment of the present invention, in terms of performing feature extraction on the state information to obtain a first feature vector, the processor 602 is specifically configured to perform the following operations:
constructing a first adjacency matrix according to the topological relation graph;
adding features of the first adjacency matrix to obtain a second adjacency matrix;
determining a first degree matrix according to the second adjacency matrix;
and performing graph convolution processing according to the second adjacent matrix and the first degree matrix to obtain a first feature vector.
In an embodiment of the present invention, in terms of performing graph convolution processing according to the second adjacency matrix and the first degree matrix to obtain the first feature vector, the processor 602 is specifically configured to perform the following operations:
obtaining an inverse matrix of the first degree matrix, and performing square-root processing on the inverse matrix to obtain a first matrix;
inputting the first matrix, the second adjacency matrix and the learning matrix corresponding to the graph convolution processing into an activation function to obtain a first feature vector, wherein the activation function can be expressed by the following formula:

H = ζ(D̂^(-1/2) Â D̂^(-1/2) W)

where ζ represents a sigmoid activation function, D̂ represents the first degree matrix, Â represents the second adjacency matrix, and W represents the learning matrix.
In an embodiment of the present invention, in determining candidate disease information corresponding to a pet to be detected according to the first feature vector and a medical record of the pet to be detected, the processor 602 is specifically configured to perform the following operations:
matching in the medical record of the pet to be detected according to the first feature vector to obtain at least one first candidate disease information and at least one first probability, wherein the at least one first candidate disease information corresponds to the at least one first probability one to one;
matching in a disease library according to the first feature vector to obtain at least one second candidate disease information and at least one second probability, wherein the at least one second candidate disease information and the at least one second probability are in one-to-one correspondence;
performing intersection processing on the at least one first candidate disease information and the at least one second candidate disease information to obtain at least one third candidate disease information;
multiplying a first probability value and a second probability value corresponding to each piece of third candidate disease information in the at least one piece of third candidate disease information to obtain at least one third probability value, wherein the at least one third probability value is in one-to-one correspondence with the at least one piece of third candidate disease information;
and determining candidate disease state information corresponding to the pet to be checked in the at least one third candidate disease state information according to the at least one third probability value, wherein the third probability value corresponding to the candidate disease state information corresponding to the pet to be checked is larger than or equal to the first threshold value.
In an embodiment of the present invention, in generating the examination flow information according to the candidate condition information and the examination habit information, the processor 602 is specifically configured to perform the following operations:
determining at least one examination item according to the candidate disease information;
and sequencing at least one inspection item according to the inspection habit information to obtain inspection flow information.
It should be understood that the medical record information-based light type regulation and control device in the present application may include a smart phone (e.g., an Android phone, an iOS phone, or a Windows phone), a tablet computer, a palmtop computer, a notebook computer, a Mobile Internet Device (MID), a robot, a wearable device, and the like. The above list is merely an example and is not exhaustive; the device includes but is not limited to the devices listed. In practical applications, the light type regulation and control device based on medical record information may further include an intelligent vehicle-mounted terminal, computer equipment, and the like.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by combining software and a hardware platform. Based on such understanding, all or part of the technical solutions of the present invention, which contribute to the background art, can be embodied in the form of a software product, which can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for causing a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present invention.
Therefore, the present application also provides a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to implement part or all of the steps of any one of the methods for regulating and controlling a light type based on medical record information as described in the above method embodiments. For example, the storage medium may include a hard disk, a floppy disk, an optical disk, a magnetic tape, a magnetic disk, a flash memory, and the like.
Embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute part or all of the steps of any one of the medical record information-based light type adjustment and control methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are all alternative embodiments and that the acts and modules referred to are not necessarily required by the application.
In the above embodiments, the description of each embodiment has its own emphasis, and for parts not described in detail in a certain embodiment, reference may be made to the description of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is merely a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solutions of the present application, in essence or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, and the memory may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the methods and their core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A lighting type regulation and control method based on medical record information is characterized by comprising the following steps:
acquiring state information of a pet to be checked, and performing feature extraction on the state information to obtain a first feature vector;
determining the medical record of the pet to be detected in a medical record library according to the pet information of the pet to be detected;
determining candidate disease information corresponding to the pet to be detected according to the first feature vector and the medical record of the pet to be detected;
determining an examining doctor of the pet to be examined according to the candidate disease information, and acquiring examination habit information of the examining doctor;
generating inspection flow information according to the candidate disease information and the inspection habit information;
and generating a light type regulation and control instruction according to the inspection flow information, and regulating and controlling the light type of the inspection lamp according to the light type regulation and control instruction.
2. The method of claim 1, wherein said obtaining status information of the pet to be inspected comprises:
acquiring real-time video stream data containing the pet to be detected, wherein the real-time video stream data is acquired in real time through a camera after preset stimulation is performed on the pet to be detected, and the real-time video stream data comprises a plurality of video frames;
determining at least one key frame in the plurality of video frames, wherein each key frame in the at least one key frame is a video frame of the pet to be checked which generates abnormal response to the stimulus;
extracting features of each key frame to obtain at least one second feature vector, wherein the at least one second feature vector corresponds to the at least one key frame one to one;
and establishing a topological relation graph according to the at least one second characteristic vector, and taking the topological relation graph as the state information of the pet to be detected.
3. The method according to claim 2, wherein said extracting features from each key frame to obtain at least one second feature vector comprises:
extracting each key frame together with the n-1 video frames preceding it to obtain n video frames corresponding to each key frame, wherein n is an integer greater than or equal to 1;
performing feature extraction on the n frames of video frames to obtain n third feature vectors, wherein the n third feature vectors correspond to the n frames of video frames one to one;
and splicing the n third feature vectors to obtain a second feature vector corresponding to each key frame.
4. The method of claim 2, wherein the performing feature extraction on the state information to obtain a first feature vector comprises:
constructing a first adjacency matrix according to the topological relation graph;
adding features to the first adjacency matrix to obtain a second adjacency matrix;
determining a first degree matrix according to the second adjacent matrix;
and performing graph convolution processing according to the second adjacent matrix and the first degree matrix to obtain the first feature vector.
5. The method of claim 4, wherein the performing graph convolution processing according to the second adjacency matrix and the first degree matrix to obtain the first feature vector comprises:
obtaining an inverse matrix of the first degree matrix, and performing square root processing on the inverse matrix to obtain a first matrix;
inputting the first matrix, the second adjacency matrix and a learning matrix corresponding to the graph convolution processing into an activation function to obtain the first feature vector, wherein the activation function satisfies the following formula:

H = ζ(D̂^(-1/2) Â D̂^(-1/2) W)

wherein ζ denotes a sigmoid activation function, D̂ denotes the first degree matrix, Â denotes the second adjacency matrix, and W denotes the learning matrix.
6. The method according to claim 1, wherein the determining candidate disease information corresponding to the pet to be inspected according to the first feature vector and the medical record of the pet to be inspected comprises:
matching in the medical record of the pet to be inspected according to the first feature vector to obtain at least one first candidate disease information and at least one first probability, wherein the at least one first candidate disease information is in one-to-one correspondence with the at least one first probability;
matching in a disease library according to the first feature vector to obtain at least one second candidate disease information and at least one second probability, wherein the at least one second candidate disease information is in one-to-one correspondence with the at least one second probability;
performing intersection processing on the at least one first candidate disease information and the at least one second candidate disease information to obtain at least one third candidate disease information;
multiplying the first probability and the second probability corresponding to each of the at least one third candidate disease information to obtain at least one third probability, wherein the at least one third probability is in one-to-one correspondence with the at least one third candidate disease information;
and determining the candidate disease information corresponding to the pet to be inspected from the at least one third candidate disease information according to the at least one third probability, wherein the third probability corresponding to the candidate disease information of the pet to be inspected is greater than or equal to a first threshold.
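The intersection-and-multiply selection of claim 6 can be sketched as follows. The dict representation of candidates, the condition names, and the threshold value are assumptions for illustration:

```python
def select_candidates(first_matches, second_matches, threshold):
    """Claim 6 sketch: intersect the first and second candidate disease
    information, multiply the corresponding probabilities to get third
    probabilities, and keep the third candidates that reach the first
    threshold. The {condition: probability} format is an assumption."""
    common = first_matches.keys() & second_matches.keys()
    scored = {c: first_matches[c] * second_matches[c] for c in common}
    return {c: p for c, p in scored.items() if p >= threshold}

# Hypothetical matches from the pet's medical record and the disease library
record = {"gastritis": 0.75, "enteritis": 0.5, "dermatitis": 0.25}
library = {"gastritis": 0.5, "enteritis": 0.25}
result = select_candidates(record, library, threshold=0.3)
# "gastritis" (0.75 * 0.5 = 0.375) is kept; "enteritis" (0.125) is dropped
```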
7. The method of claim 1, wherein the generating examination flow information according to the candidate disease information and the examination habit information comprises:
determining at least one examination item according to the candidate disease information;
and sorting the at least one examination item according to the examination habit information to obtain the examination flow information.
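The sorting in claim 7 can be sketched as follows. Representing the examination habit information as an ordered list of item names, and the item names themselves, are assumptions:

```python
def order_examination(items, habit_order):
    """Claim 7 sketch: sort the examination items determined from the
    candidate disease information by the examining doctor's habitual
    order. Items absent from the habit list are placed last."""
    rank = {name: i for i, name in enumerate(habit_order)}
    return sorted(items, key=lambda it: rank.get(it, len(rank)))

items = ["blood test", "x-ray", "palpation"]                 # from candidate disease info
habit = ["palpation", "x-ray", "blood test", "ultrasound"]   # doctor's habitual order
flow = order_examination(items, habit)
# flow == ["palpation", "x-ray", "blood test"]
```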
8. A light type regulation and control device based on medical record information, wherein the device comprises:
an extraction module, configured to acquire state information of a pet to be inspected and perform feature extraction on the state information to obtain a first feature vector;
a matching module, configured to determine a medical record of the pet to be inspected in a medical record library according to pet information of the pet to be inspected, and to determine candidate disease information corresponding to the pet to be inspected according to the first feature vector and the medical record of the pet to be inspected;
a processing module, configured to determine an examining doctor of the pet to be inspected according to the candidate disease information, acquire examination habit information of the examining doctor, and generate examination flow information according to the candidate disease information and the examination habit information;
and a regulation module, configured to generate a light type regulation instruction according to the examination flow information, and to regulate the light type of an examination lamp according to the light type regulation instruction.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs including instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method according to any one of claims 1-7.
CN202111636583.3A 2021-12-29 2021-12-29 Light type regulation and control method and device based on medical record information and electronic equipment Active CN114449701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111636583.3A CN114449701B (en) 2021-12-29 2021-12-29 Light type regulation and control method and device based on medical record information and electronic equipment


Publications (2)

Publication Number Publication Date
CN114449701A true CN114449701A (en) 2022-05-06
CN114449701B CN114449701B (en) 2023-08-08

Family

ID=81365311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111636583.3A Active CN114449701B (en) 2021-12-29 2021-12-29 Light type regulation and control method and device based on medical record information and electronic equipment

Country Status (1)

Country Link
CN (1) CN114449701B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080285070A1 (en) * 2007-05-15 2008-11-20 Fuji Xerox Co., Ltd. Electronic writing instrument, computer system, electronic writing method and computer readable medium
US20080319305A1 (en) * 2007-05-22 2008-12-25 Diana Martin Method for determining at least one pet parameter
CN105793852A (en) * 2013-12-04 2016-07-20 M·奥利尼克 Medical treatment computer planning method and system with mass medical analysis
US20180218793A1 (en) * 2017-01-27 2018-08-02 Michael Edwin Hebrard Physician examination scheduling system and processes to self-report symptoms for an examination
CN111310596A (en) * 2020-01-20 2020-06-19 北京海益同展信息科技有限公司 Animal diseased state monitoring system and method
US10687516B1 (en) * 2019-09-10 2020-06-23 Jacobus Sarel Van Eeden Methods and systems for facilitating the management of data associated with a pet
US20200302180A1 (en) * 2018-03-13 2020-09-24 Tencent Technology (Shenzhen) Company Limited Image recognition method and apparatus, terminal, and storage medium
CN111708940A (en) * 2020-05-29 2020-09-25 北京百度网讯科技有限公司 Question processing method and device, electronic equipment and storage medium
US20200375526A1 (en) * 2019-05-31 2020-12-03 Samsung Electronics Co., Ltd. Electronic device for controlling skin-care device and method of operating the same
CN112639990A (en) * 2018-04-30 2021-04-09 小利兰·斯坦福大学托管委员会 System and method for maintaining health using personal digital phenotype
CN112700826A (en) * 2020-12-30 2021-04-23 杭州依图医疗技术有限公司 Medical data processing method and device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG Jinbao: "Brief description of the basic outpatient diagnosis process of an animal hospital: taking the Animal Hospital of Shandong Vocational Animal Science and Veterinary College as an example", 畜牧兽医科技信息 (Animal Husbandry and Veterinary Science and Technology Information), no. 01 *

Also Published As

Publication number Publication date
CN114449701B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN111008640B (en) Image recognition model training and image recognition method, device, terminal and medium
US20210034813A1 (en) Neural network model with evidence extraction
US10874355B2 (en) Methods and apparatus to determine developmental progress with artificial intelligence and user input
WO2022213465A1 (en) Neural network-based image recognition method and apparatus, electronic device, and medium
CN112183577A (en) Training method of semi-supervised learning model, image processing method and equipment
KR20210073569A (en) Method, apparatus, device and storage medium for training image semantic segmentation network
US11152119B2 (en) Care path analysis and management platform
CN111368672A (en) Construction method and device for genetic disease facial recognition model
CN109522970B (en) Image classification method, device and system
CN112395979A (en) Image-based health state identification method, device, equipment and storage medium
CN113707323B (en) Disease prediction method, device, equipment and medium based on machine learning
WO2022111387A1 (en) Data processing method and related apparatus
WO2023050143A1 (en) Recommendation model training method and apparatus
WO2024002167A1 (en) Operation prediction method and related apparatus
CN112016617B (en) Fine granularity classification method, apparatus and computer readable storage medium
CN113283368A (en) Model training method, face attribute analysis method, device and medium
CN114022841A (en) Personnel monitoring and identifying method and device, electronic equipment and readable storage medium
CN109117800A (en) Face gender identification method and system based on convolutional neural networks
CN114449701A (en) Medical record information-based light type regulation and control method and device and electronic equipment
Lau et al. Tree structure convolutional neural networks for gait-based gender and age classification
CN111339952A (en) Image classification method and device based on artificial intelligence and electronic equipment
CN116959099A (en) Abnormal behavior identification method based on space-time diagram convolutional neural network
Malakar et al. Detection of face mask in real-time using convolutional neural networks and open-CV
CN116150690A (en) DRGs decision tree construction method and device, electronic equipment and storage medium
CN116362301A (en) Model quantization method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant