
CN116389761B - Clinical simulation teaching data management system of nursing - Google Patents


Info

Publication number
CN116389761B
CN116389761B
Authority
CN
China
Prior art keywords
frame
centrality
video
key
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310537484.2A
Other languages
Chinese (zh)
Other versions
CN116389761A (en)
Inventor
夏正新
竺波
李芳
汪洋
苏翀
张振涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202310537484.2A priority Critical patent/CN116389761B/en
Publication of CN116389761A publication Critical patent/CN116389761A/en
Application granted granted Critical
Publication of CN116389761B publication Critical patent/CN116389761B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to the field of video coding and compression, in particular to a nursing clinical simulation teaching data management system. The system performs key point matching on adjacent video frames of the clinical simulation teaching video to obtain key chains; obtains the centrality of each key point according to the distribution of key points in each video frame, yielding centrality chains; obtains divided frames through the centrality chains; obtains the probability that each divided frame is a main body frame by analyzing the distribution and aggregation of the key points in the divided frames; screens out attention images by combining this probability with the gray change values of the divided frames; and obtains the main body region of each video frame through the main body region in the attention image and the key chains. Each video frame is then compressed according to its main body region. The invention achieves high compression efficiency, highlights the main body, and improves teaching quality.

Description

Clinical simulation teaching data management system of nursing
Technical Field
The invention relates to the field of video coding compression, in particular to a nursing clinical simulation teaching data management system.
Background
In a simulation environment, each simulation entity sends data such as its simulated position and orientation to an image generator; the image generator receives all data from the simulation entities, together with data shared across the whole network, renders scene video data from a specific angle, and transmits it to a user interface for display. Existing methods transmit all of the simulation video data and, because the data volume is large, apply lossless compression to all of it; the resulting compression rate is low, transmission is slow, and playback on the user interface is prone to stuttering.
The invention provides a nursing clinical simulation teaching data management system which obtains the main body region of each video frame from the attention images acquired in the simulation teaching video and compresses the non-main-body regions of different video frames to different degrees. This reduces the data volume on one hand and, owing to the visual characteristics of the human eye, highlights the main body and improves teaching quality on the other.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention aims to provide a nursing clinical simulation teaching data management system.
In order to achieve the above purpose, the present invention provides the following technical solution: a nursing clinical simulation teaching data management system, the system comprising:
the simulation video acquisition module acquires clinical simulation teaching videos;
the simulation video matching module is used for carrying out SIFT matching on all adjacent video frames in the simulation teaching video, obtaining key points in each video frame and matching relations of the key points between the adjacent video frames, and obtaining a key chain according to the matching relations of the key points between the adjacent video frames; acquiring the centrality of each key point according to the coordinates of the central pixel point of the video frame; taking a sequence formed by centrality of all key points on each key chain as a centrality chain; obtaining a segmentation frame according to the centrality chain;
the main body region acquisition module is used for dividing all key points into a plurality of attribute categories according to the centrality of all key points on the segmentation frame; performing density clustering on all key points contained in one attribute category to obtain a plurality of space categories; acquiring the probability of the divided frame as a main frame according to key points of a plurality of space categories in each attribute category in the divided frame; acquiring main body key chains according to the attribute types of the divided frames, and acquiring gray level change values of the divided frames according to gray level values of all key points on each main body key chain of the divided frames; multiplying the normalized gray level change value of each divided frame by the probability that each divided frame is a main frame to obtain the attention degree of each divided frame, and acquiring an attention image according to the attention degree; dividing all video frames in the clinical simulation teaching video into a plurality of image groups by taking each concerned image as a dividing point, and acquiring a main body region of each video frame in the image groups according to main body key points in the concerned images in the image groups;
the simulation video compression module is used for acquiring the size of a filtering window according to the centrality of all key points in a non-main-body area of the video frame, filtering the non-main-body area of the video frame according to the size of the filtering window, and compressing the filtered video frame to obtain compressed data;
and the simulation video management module is used for transmitting and decompressing the compressed data.
Preferably, the step of obtaining the key chain according to the matching relationship of the key points between the adjacent video frames includes the following steps:
dividing two matched key points in all adjacent video frames of the simulation teaching video into a category to obtain a plurality of categories, and forming a key chain by all the key points in each category according to the sequence of the video frames.
Preferably, the step of obtaining the centrality of each key point according to the coordinates of the central pixel point of the video frame includes the steps of:
acquiring the coordinates of the central pixel point of the video frame and recording them as $(x_0, y_0)$; taking any one key point on the video frame as a target key point and recording its coordinates in the video frame as $(x, y)$; and obtaining the centrality of the target key point:

$$C = \frac{1}{2\pi\sigma_x\sigma_y}\exp\!\left(-\left(\frac{(x-x_0)^{2}}{2\sigma_x^{2}}+\frac{(y-y_0)^{2}}{2\sigma_y^{2}}\right)\right)$$

wherein $C$ is the centrality of the target key point $(x, y)$; $x_0$ is the abscissa and $y_0$ the ordinate of the central pixel point of the video frame; $x$ is the abscissa and $y$ the ordinate of the target key point in the video frame; $\sigma_x$ is the standard deviation of the abscissas and $\sigma_y$ the standard deviation of the ordinates of all pixel points in the video frame; $\pi$ is the circumference ratio; and $\exp$ is the exponential function with the natural constant as its base.
Preferably, the step of acquiring the split frame according to the centrality chain includes the steps of:
and taking the video frame in which the key point corresponding to the centrality larger than the preset first threshold value in each centrality chain is positioned as a segmentation frame.
Preferably, the step of classifying all the keypoints into a plurality of attribute categories according to the centrality of all the keypoints on the segmented frame includes the steps of:
constructing a centrality histogram according to the centrality of all key points in the segmentation frame, wherein the abscissa in the centrality histogram is the centrality size and the ordinate is the number of key points corresponding to each centrality; performing Otsu multi-threshold segmentation on the centrality histogram, dividing the centrality into a plurality of classes, and forming a category from all key points corresponding to each class as an attribute category.
Preferably, obtaining the probability that the divided frame is a main body frame according to the key points of the plurality of space categories in each attribute category of the divided frame includes the following steps: taking any one of the divided frames as a target divided frame, acquiring the centrality mean of all key points in each attribute category of the target divided frame as the centrality mean of each attribute category, taking the largest centrality mean among the centrality means of all attribute categories as the maximum centrality mean of the target divided frame and the next largest as its sub-maximum centrality mean; the probability that the target divided frame is a main body frame is:

$$p = \left(1-\frac{\bar{C}_{\mathrm{sub}}}{\bar{C}_{\max}}\right)\cdot\frac{1}{M}\sum_{i=1}^{M}\frac{1}{n_i^{2}}\sum_{j=1}^{n_i}\frac{k_{i,j}}{K_{i,j}}$$

wherein $p$ is the probability that the target divided frame is a main body frame; $M$ is the number of attribute categories in the target divided frame; $n_i$ is the number of space categories in the $i$-th attribute category of the target divided frame; $k_{i,j}$ is the number of key points in the $j$-th space category of the $i$-th attribute category of the target divided frame; $K_{i,j}$ is the number of key points contained in the convex hull region of all key points in the $j$-th space category of the $i$-th attribute category of the target divided frame; $\bar{C}_{\max}$ is the maximum centrality mean of the target divided frame; and $\bar{C}_{\mathrm{sub}}$ is the sub-maximum centrality mean of the target divided frame.
Preferably, the step of obtaining the main key chain according to the attribute type of the split frame includes the following steps:
and acquiring a key chain to which each key point in the attribute type corresponding to the maximum centrality mean value of the divided frames belongs as a main key chain.
Preferably, the step of obtaining the gray scale variation value of the divided frame according to the gray scale values of all key points on each main key chain of the divided frame includes the steps of:
and acquiring the variance of the gray values of all key points on each main body key chain, taking the variance as the gray variance of each main body key chain, and taking the average value of the gray variances of all main body key chains of the divided frames as the gray variation value of the divided frames.
Preferably, the step of acquiring the main body region of each video frame in the image group according to the main body key point in the attention image in the image group includes the steps of:
and taking all key points in the attribute type corresponding to the maximum centrality mean value in the concerned image in each image group as main body key points in the concerned image, taking key points on the same key chain with the main body key points in the rest video frames in the image group as main body key points of each video frame, and taking a convex hull region formed by all the main body key points of each video frame in the image group as a main body region of each video frame.
Preferably, the step of obtaining the filter window size according to the centrality of all key points in the non-main area of the video frame includes the steps of:
taking the average value of centrality of all key points in a non-main area of the video frame as the centrality of the non-main area of the video frame, rounding and rounding the inverse of the centrality of the non-main area to obtain the filter window size of the non-main area.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, key point matching is carried out on adjacent video frames of clinical simulation teaching video, so that a key chain is obtained, the centrality of the key points is obtained according to the distribution condition of the key points on each video frame, so that a centrality chain is obtained, video frames with main areas possibly positioned at the centers of the video frames are obtained through the centrality chain to serve as divided frames, the probability that each divided frame is the main frame is obtained through analyzing the distribution condition and the aggregation of the key points in the divided frames, the video frames with the main areas positioned at the centers of the video frames are screened out by combining the gray level change values of the divided frames to serve as attention images, and the main areas of each video frame are obtained through the main areas and the key chain in the attention images. The subject area is the most important area in the video frame, and the subject area is not filtered by the invention, so that any details of the subject area can be maintained after the compressed data is decompressed later. According to the invention, the attention characteristic of the human eyes to the center of the video frame is considered, the non-main body region is filtered according to the centrality of the non-main body region, so that the non-main body region which is not concerned by the human eyes can be ensured to be smoothly compressed to the greatest extent, the main body region is highlighted, and the teaching quality is improved. For the video frame after mean filtering smoothing, the pixel values of the pixel points of the smooth area in the video frame tend to be consistent, the compression rate can be greatly improved by ZIP compression, and the transmission efficiency of the clinical simulation teaching video is ensured.
Drawings
Fig. 1 is a system block diagram of a nursing clinical simulation teaching data management system according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended aim, the following describes in detail the specific implementation, structure, features and effects of the nursing clinical simulation teaching data management system according to the invention, with reference to the accompanying drawings and preferred embodiments. In the following description, references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the nursing clinical simulation teaching data management system provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a nursing clinical simulation teaching data management system according to an embodiment of the present invention is shown; the system includes the following modules:
and the simulation video acquisition module S101 is used for acquiring clinical simulation teaching videos.
In the simulation environment, a plurality of simulation entities are utilized to send data information such as simulation positions, directions and the like of the simulation entities to an image generator, and the image generator renders clinical simulation teaching videos from a specific angle.
It should be noted that, the image generator needs to transmit the clinical simulation teaching video to the local storage server, and when the user needs to watch the clinical simulation teaching video, the local storage server sends the clinical simulation teaching video to the user side and displays the clinical simulation teaching video on the user interface. In the transmission process of the clinical simulation teaching video, in order to ensure the transmission efficiency, the clinical simulation teaching video needs to be compressed and transmitted.
And the simulation video matching module S102 is used for matching the video frames to obtain a key chain, a central chain and a segmentation frame of the clinical simulation teaching video.
It should be noted that the main body in the simulation teaching video changes, as does its position. For example, in surgical teaching the scalpel serves as the main body: at the beginning of the lesson the surgical background is taught and the clinical simulation teaching video is panoramic; as the operation proper begins, the video focuses on the specific scalpel manipulation, and to highlight the main body it is usually placed in the middle of the video frame. Therefore, to keep the main body sharp when the clinical simulation teaching video is played through the user interface, the background portion of each video frame can be appropriately blurred, which reduces the data volume on one hand and, given the visual characteristics of the human eye, highlights the main body and improves teaching quality on the other. To this end, the correspondence between pixel points in adjacent video frames must first be obtained; the SIFT descriptor is a matching operator with strong robustness, and this operator is adopted to obtain the correspondence between pixel points in adjacent video frames.
In the embodiment of the invention, SIFT matching is performed on all adjacent video frames in the simulation teaching video to obtain the key points in each video frame and the matching relations of key points between adjacent video frames. Every pair of matched key points across all adjacent video frames of the simulation teaching video is placed in one category, so that the key points in all video frames can be divided into several categories, and all key points in each category form a key chain according to the sequence of the video frames. For example, if key point a1 in the first video frame matches key point a2 in the second video frame, a2 matches key point a3 in the third video frame, and a3 has no matching object in the fourth video frame, then {a1, a2, a3} is one category, and the key chain it constitutes is (a1, a2, a3).
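The matching-and-chaining step can be sketched compactly. Below is a minimal illustration assuming OpenCV's SIFT implementation with brute-force, cross-checked matching; the function name and structure are illustrative, not the patent's reference implementation:

    import cv2

    def build_key_chains(frames):
        # Detect SIFT key points per frame, match adjacent frames,
        # and link matches into key chains (one chain per tracked point).
        sift = cv2.SIFT_create()
        matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
        kps, descs = [], []
        for f in frames:
            k, d = sift.detectAndCompute(f, None)
            kps.append(k)
            descs.append(d)

        chains = []        # each chain: list of (frame_index, (x, y))
        open_chains = {}   # key-point index in current frame -> its chain
        for i in range(len(frames) - 1):
            next_open = {}
            for m in matcher.match(descs[i], descs[i + 1]):
                chain = open_chains.pop(m.queryIdx, None)
                if chain is None:                      # new chain starts here
                    chain = [(i, kps[i][m.queryIdx].pt)]
                    chains.append(chain)
                chain.append((i + 1, kps[i + 1][m.trainIdx].pt))
                next_open[m.trainIdx] = chain
            open_chains = next_open                    # unmatched chains end
        return chains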
It should be noted that, since the main body is usually located in the middle of the video frames, the centrality of each key point can be obtained according to its position in each video frame. The centrality of a key point is larger the closer it lies to the center of the video frame and smaller the farther it lies from the center. This distribution characteristic is consistent with that of a two-dimensional Gaussian distribution function centered on the center of the video frame, so the centrality of each key point in each video frame is acquired according to a two-dimensional Gaussian distribution function.
In the embodiment of the invention, for any video frame, the coordinates of its central pixel point are first obtained and recorded as $(x_0, y_0)$; any key point on the video frame is taken as the target key point and its coordinates in the video frame are recorded as $(x, y)$. The centrality $C$ of the target key point is:

$$C = \frac{1}{2\pi\sigma_x\sigma_y}\exp\!\left(-\left(\frac{(x-x_0)^{2}}{2\sigma_x^{2}}+\frac{(y-y_0)^{2}}{2\sigma_y^{2}}\right)\right)$$

wherein $C$ is the centrality of the target key point $(x, y)$; $x_0$ is the abscissa and $y_0$ the ordinate of the central pixel point of the video frame; $x$ is the abscissa and $y$ the ordinate of the target key point in the video frame; $\sigma_x$ is the standard deviation of the abscissas and $\sigma_y$ the standard deviation of the ordinates of all pixel points in the video frame; $\pi$ is the circumference ratio; and $\exp$ is the exponential function with the natural constant as its base.
Similarly, centrality of each key point in each video frame is obtained. Note that the centrality of the keypoints at the same position in different video frames is the same.
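The centrality computation is a direct transcription of the formula above, on the assumption that the reconstructed normalized two-dimensional Gaussian matches the patent's intent; the closed-form standard deviations of a uniform pixel grid are used:

    import numpy as np

    def centrality(pt, frame_shape):
        # 2-D Gaussian centrality of a key point (x, y) in a frame of
        # shape (h, w); sigma_x / sigma_y are the standard deviations of
        # the x / y coordinates of all pixels in the frame.
        h, w = frame_shape[:2]
        x0, y0 = (w - 1) / 2.0, (h - 1) / 2.0      # central pixel point
        sigma_x = np.arange(w).std()
        sigma_y = np.arange(h).std()
        x, y = pt
        norm = 1.0 / (2.0 * np.pi * sigma_x * sigma_y)
        return norm * np.exp(-((x - x0) ** 2 / (2 * sigma_x ** 2)
                               + (y - y0) ** 2 / (2 * sigma_y ** 2)))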
The sequence in which the centrality of all the keypoints on each key chain is composed in the order of the keypoints on the key chain is denoted as centrality chain.
It should be noted that, in the clinical simulation teaching video, the main body is usually kept close to the central region of the video frame in order to highlight it; however, because the main body moves continuously, in some video frames it may not lie in the central region. The larger the centrality of a key point, the more likely it is to be a pixel point of the main body, so the centrality chains can be used to segment the video frames of the clinical simulation teaching video and obtain the video frames in which the main body is located at the frame center.
In the embodiment of the invention, a numerical value is preset and recorded as the preset first threshold; in each centrality chain, the video frame containing a key point whose centrality is greater than the preset first threshold is taken as a divided frame. In other embodiments, the practitioner may set the preset first threshold according to the actual implementation.
Thus, a key chain, a center chain, and a split frame are acquired.
The main area acquisition module S103 acquires the probability of the divided frame being the main frame, acquires the image of interest according to the probability of the divided frame being the main frame, and acquires the main area according to the image of interest.
It should be noted that a key point obtained with the SIFT algorithm may be a key feature of the video frame, but it may also be noise; likewise, a key point with large centrality may be a pixel point of the main body, but it may also be noise. Key points of the main body exhibit strong aggregation, and their centrality follows the same trend across different video frames, whereas noise has large randomness, aggregates weakly, and is very unlikely to aggregate consistently across different video frames. Since some key points in the divided frames have large centrality, the video frames whose main body lies at the frame center can be acquired according to the distribution and aggregation of key points in the divided frames; from these, the main body of every video frame can be acquired, and the main body and non-main-body parts can then be compressed to different degrees.
In the embodiment of the invention, for any one divided frame, the centralities of all key points in the divided frame are acquired and a centrality histogram is constructed, with the centrality on the abscissa and the number of key points corresponding to each centrality on the ordinate. Otsu multi-threshold segmentation is performed on the centrality histogram, dividing the centralities into several classes; all key points corresponding to each class form one category, called an attribute category, so that several attribute categories are obtained. The centralities of all key points in the same attribute category are similar, while the centralities of key points in different attribute categories differ considerably.
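A sketch of this attribute-category step, assuming scikit-image's threshold_multiotsu; the number of classes is an illustrative choice, since the embodiment does not fix it:

    import numpy as np
    from skimage.filters import threshold_multiotsu

    def attribute_categories(centralities, classes=3):
        # Split key points into attribute categories by Otsu
        # multi-threshold segmentation of their centrality values.
        c = np.asarray(centralities, dtype=float)
        thresholds = threshold_multiotsu(c, classes=classes)
        return np.digitize(c, bins=thresholds)   # labels 0 .. classes-1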
Density clustering is performed on all key points contained in one attribute category to obtain several clusters, and each cluster of the attribute category is taken as a space category. In the embodiment of the invention, DBSCAN is used as the density clustering method.
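The space-category step then reduces to one clustering call; the eps and min_samples values below are illustrative, since the embodiment leaves them open:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def space_categories(points, eps=15.0, min_samples=3):
        # Density-cluster the (x, y) key points of one attribute
        # category; each non-negative label is one space category,
        # and -1 marks noise points.
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
            np.asarray(points, dtype=float))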
It should be noted that, for each attribute category, the fewer the clusters obtained by density clustering, that is, the smaller the number of space categories (at least 1), and the larger the ratio of the number of key points in each space category to the number of key points contained in the minimum circumscribing region of that space category, the more likely the main body of the video frame is located at the frame center. Meanwhile, the main body in the clinical simulation teaching video changes: when one target advances in the direction of increasing centrality, i.e., gradually approaches the frame center, that target is the main body, and other targets necessarily move away from the region of large centrality. Therefore, the larger the difference between the centrality mean of the attribute category with the largest centrality mean and the centrality means of the other attribute categories in a video frame, the greater the probability that the main body of the video frame is located at the frame center.
In the embodiment of the present invention, a video frame in which the main body is located at the center of the video frame is called a main body frame. Any one of the divided frames is taken as the target divided frame; the mean centrality of all key points in each attribute category of the target divided frame is acquired as the centrality mean of that attribute category; the largest of these centrality means is taken as the maximum centrality mean of the target divided frame, and the next largest as its sub-maximum centrality mean. The probability $p$ that the target divided frame is a main body frame is:

$$p = \left(1-\frac{\bar{C}_{\mathrm{sub}}}{\bar{C}_{\max}}\right)\cdot\frac{1}{M}\sum_{i=1}^{M}\frac{1}{n_i^{2}}\sum_{j=1}^{n_i}\frac{k_{i,j}}{K_{i,j}}$$

wherein $p$ is the probability that the target divided frame is a main body frame; $M$ is the number of attribute categories in the target divided frame; $n_i$ is the number of space categories in the $i$-th attribute category of the target divided frame; $k_{i,j}$ is the number of key points in the $j$-th space category of the $i$-th attribute category; $K_{i,j}$ is the number of key points contained in the convex hull region of all key points in the $j$-th space category of the $i$-th attribute category; $\bar{C}_{\max}$ is the maximum centrality mean of the target divided frame; and $\bar{C}_{\mathrm{sub}}$ is its sub-maximum centrality mean. The larger the number of space categories in each attribute category of the target divided frame, the more discrete the distribution of that category's key points and the smaller the probability that the target divided frame is a main body frame. The larger the proportion $k_{i,j}/K_{i,j}$ of the key points of the $j$-th space category among all key points inside its convex hull region (its minimum circumscribing region), the greater the concentration of that space category; the larger the mean concentration over all space categories of the $i$-th attribute category, the larger the probability that the target divided frame is a main body frame. The smaller the ratio of the sub-maximum centrality mean to the maximum centrality mean of the target divided frame, the larger the difference between them, and the larger the probability that the target divided frame is a main body frame.
Similarly, the probability of each divided frame being a main body frame is obtained.
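Under the reconstructed formula above, this probability can be sketched as follows; the convex-hull membership test and the handling of degenerate hulls (fewer than three points) are assumptions of this sketch:

    import numpy as np
    from scipy.spatial import ConvexHull
    from matplotlib.path import Path

    def main_body_frame_probability(attr_cats, all_points, c_max, c_sub):
        # attr_cats: one entry per attribute category, each a list of
        # space categories given as (k, 2) arrays of key-point coords.
        # all_points: (N, 2) array of every key point in the frame.
        # c_max / c_sub: maximum and sub-maximum centrality means.
        all_points = np.asarray(all_points, dtype=float)
        M = len(attr_cats)
        acc = 0.0
        for space_cats in attr_cats:
            n_i = max(len(space_cats), 1)
            conc = 0.0                        # sum over j of k_ij / K_ij
            for pts in space_cats:
                pts = np.asarray(pts, dtype=float)
                k_ij = len(pts)
                if k_ij < 3:                  # degenerate hull: fully tight
                    conc += 1.0
                    continue
                hull = Path(pts[ConvexHull(pts).vertices])
                inside = np.count_nonzero(hull.contains_points(all_points))
                conc += k_ij / max(inside, k_ij)
            acc += conc / (n_i * n_i)         # mean over j, 1/n_i penalty
        return (1.0 - c_sub / c_max) * acc / M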
It should be noted that the larger the change of the gray values of all key points on a key chain, the greater the change in the environmental characteristics of the corresponding target in the clinical simulation teaching video (for example, a target moving under the operating lights); such cases require greater attention.
In the embodiment of the invention, for each divided frame, the key chain to which each key point in the attribute category corresponding to the maximum centrality mean belongs is acquired and taken as a main body key chain. The variance of the gray values of all key points on each main body key chain is acquired as the gray variance of that main body key chain, and the mean of the gray variances of all main body key chains of the divided frame is taken as the gray change value of the divided frame.
And similarly, acquiring the gray level change value of each divided frame, normalizing the gray level change values of all the divided frames, and multiplying the normalized gray level change value of each divided frame by the probability that each divided frame is a main frame to obtain the attention degree of each divided frame. The greater the gray level variation value of a divided frame and the greater the probability of being a subject frame, the greater the degree of attention of the divided frame.
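A minimal sketch of the attention-degree computation, assuming min-max normalization (the embodiment only states that the gray change values are normalized):

    import numpy as np

    def attention_degrees(gray_change_values, main_body_probs):
        # Normalize gray change values of all divided frames to [0, 1],
        # then weight by each frame's main-body-frame probability.
        g = np.asarray(gray_change_values, dtype=float)
        span = g.max() - g.min()
        g_norm = (g - g.min()) / span if span > 0 else np.zeros_like(g)
        return g_norm * np.asarray(main_body_probs, dtype=float)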
A numerical value is preset and recorded as the preset second threshold; the divided frames whose degree of attention is greater than the preset second threshold are taken as attention images. In other embodiments, the practitioner may set the preset second threshold according to the actual implementation.
The main body in an attention image is located at the center of the attention image, and the attention image can be used as a reference image for the video frames preceding it in order to obtain the main body region in those preceding video frames.
In the embodiment of the invention, all video frames in the clinical simulation teaching video are grouped by taking each concerned image as a segmentation point: all video frames before a first attention image in the clinical simulation teaching video and the attention image are taken as an image group, video frames between the first attention image and a second attention image in the clinical simulation teaching video and the second attention image are taken as an image group, video frames between the second attention image and a third attention image in the clinical simulation teaching video and the third attention image are taken as an image group, and the like, so that all the image groups are obtained.
And taking all key points in the attribute type corresponding to the maximum centrality mean value in the concerned image in each image group as main body key points in the concerned image, and taking the key points on the same key chain with the main body key points in the rest video frames in the image group as main body key points of each video frame.
The convex hull region formed by all the main body key points of each video frame (including the attention image) in the image group is taken as the main body region of each video frame.
Thus, the main body area in each video frame in the clinical simulation teaching video is obtained.
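The main body region of a frame can be rasterized from its main body key points as a binary mask; the following OpenCV sketch (with the mask convention of 255 inside the hull) is illustrative:

    import cv2
    import numpy as np

    def main_body_mask(frame_shape, body_points):
        # Fill the convex hull of the frame's main body key points.
        h, w = frame_shape[:2]
        mask = np.zeros((h, w), dtype=np.uint8)
        pts = np.asarray(body_points, dtype=np.int32).reshape(-1, 1, 2)
        cv2.fillConvexPoly(mask, cv2.convexHull(pts), 255)
        return mask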
And the simulation video compression module S104 compresses clinical simulation teaching videos.
It should be noted that the human eye attends most to the central region of an image. The closer the main body region of a video frame is to the frame center, the farther the region outside the main body region is from the center, the less the human eye attends to it, and the more that outside region can be compressed, which improves the compression rate and highlights the main body region. Conversely, the farther the main body region is from the frame center, the closer the region outside it is to the center, and the less that region should be compressed in order to preserve the visual effect. Therefore, the embodiment of the invention compresses the region outside the main body region according to the centrality of that region on each video frame.
In the embodiment of the present invention, the area other than the main body region of each video frame is called the non-main-body region. Any video frame is taken as the target video frame, and the mean centrality of all key points in its non-main-body region is acquired as the centrality of the non-main-body region, denoted $\bar{C}$. The filter window size of the non-main-body region is $\left[\frac{1}{\bar{C}}\right]$, where $[\cdot]$ denotes the rounding symbol. Mean filtering with a window of size $\left[\frac{1}{\bar{C}}\right]$ is performed on the non-main-body region of the target video frame, the main body region is not processed in any way, and ZIP compression is performed on the filtered target video frame.
Similarly, the size of a filtering window of the non-main area of each video frame is obtained according to the centrality of the non-main area of each video frame, mean filtering is carried out on the non-main area of each video frame, and ZIP compression is carried out on each video frame after filtering. And taking the compression results of all video frames of the clinical simulation teaching video as compression data.
Thus, the compression of the clinical simulation teaching video is completed, and compressed data is obtained.
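The per-frame compression step can be sketched as below, with zlib standing in for "ZIP compression" and a safety clamp on the window size added as an assumption:

    import zlib
    import cv2
    import numpy as np

    def compress_frame(frame, body_mask, nonbody_centrality):
        # Mean-filter the non-main-body region with a window of size
        # round(1 / centrality), keep the main body region intact,
        # then compress the result losslessly.
        win = int(round(1.0 / max(nonbody_centrality, 1e-6)))
        win = min(max(win, 1), 51)              # assumed clamp
        blurred = cv2.blur(frame, (win, win))   # mean filter
        m = body_mask[..., None] if frame.ndim == 3 else body_mask
        out = np.where(m > 0, frame, blurred).astype(np.uint8)
        return zlib.compress(out.tobytes(), 9)

The receiver reverses only the zlib step (zlib.decompress, then numpy.frombuffer reshaped to the frame size); the mean filtering of the non-main-body region is lossy by design.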
It should be noted that the main body region is the most important region in the video frame; the embodiment of the invention does not filter the main body region, so all of its details are still preserved after the compressed data is later decompressed. Taking into account that the human eye attends to the center of a video frame, the embodiment of the invention filters the non-main-body region according to its centrality, ensuring that the non-main-body region, which the human eye does not attend to, is smoothed and compressed as much as possible, thereby highlighting the main body region and improving teaching quality. After mean-filter smoothing, the pixel values of the pixel points in the smoothed regions of a video frame tend toward uniformity, so ZIP compression can improve the compression rate to a large extent.
And the simulation video management module S105 is used for transmitting and decompressing clinical simulation teaching videos.
The compressed data from the image generator is transmitted to the local storage server; when a user needs to view the clinical simulation teaching video, the local server transmits the compressed data to the user side, where it is decompressed with the ZIP method to obtain the clinical simulation teaching video, which is displayed on the user-side interface.
Thus, the transmission and decompression of the clinical simulation teaching video are completed.
In summary, the system of the invention includes a simulation video acquisition module, a simulation video matching module, a main body region acquisition module, a simulation video compression module and a simulation video management module. In the embodiment of the invention, key point matching is performed on adjacent video frames of the clinical simulation teaching video to obtain key chains; the centrality of each key point is obtained according to the distribution of key points in each video frame, yielding centrality chains; video frames whose main body region may be located at the frame center are obtained through the centrality chains as divided frames; the probability that each divided frame is a main body frame is obtained by analyzing the distribution and aggregation of the key points in the divided frames; video frames whose main body region is located at the frame center are screened out as attention images in combination with the gray change values of the divided frames; and the main body region of each video frame is obtained through the attention images and the key chains. The main body region is the most important region in a video frame, and the embodiment of the invention does not filter it, so all details of the main body region are preserved after the compressed data is later decompressed. Taking into account that the human eye attends to the center of a video frame, the embodiment filters the non-main-body region according to its centrality, ensuring that the non-main-body region, which the human eye does not attend to, is smoothed and compressed as much as possible, thereby highlighting the main body region and improving teaching quality. After mean-filter smoothing, the pixel values in the smoothed regions of a video frame tend toward uniformity; ZIP compression can then greatly improve the compression rate and guarantee the transmission efficiency of the clinical simulation teaching video.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

1. A nursing clinical simulation teaching data management system, the system comprising:
the simulation video acquisition module acquires clinical simulation teaching videos;
the simulation video matching module is used for carrying out SIFT matching on all adjacent video frames in the simulation teaching video, obtaining key points in each video frame and matching relations of the key points between the adjacent video frames, and obtaining a key chain according to the matching relations of the key points between the adjacent video frames; acquiring the centrality of each key point according to the coordinates of the central pixel point of the video frame; taking a sequence formed by centrality of all key points on each key chain as a centrality chain; obtaining a segmentation frame according to the centrality chain;
the main body region acquisition module is used for dividing all key points into a plurality of attribute categories according to the centrality of all key points on the segmentation frame; performing density clustering on all key points contained in one attribute category to obtain a plurality of space categories; acquiring the probability of the divided frame as a main frame according to key points of a plurality of space categories in each attribute category in the divided frame; acquiring main body key chains according to the attribute types of the divided frames, and acquiring gray level change values of the divided frames according to gray level values of all key points on each main body key chain of the divided frames; multiplying the normalized gray level change value of each divided frame by the probability that each divided frame is a main frame to obtain the attention degree of each divided frame, and acquiring an attention image according to the attention degree; dividing all video frames in the clinical simulation teaching video into a plurality of image groups by taking each concerned image as a dividing point, and acquiring a main body region of each video frame in the image groups according to main body key points in the concerned images in the image groups;
the simulation video compression module is used for acquiring the size of a filtering window according to the centrality of all key points in a non-main-body area of the video frame, filtering the non-main-body area of the video frame according to the size of the filtering window, and compressing the filtered video frame to obtain compressed data;
the simulation video management module is used for transmitting and decompressing the compressed data;
the step of obtaining the centrality of each key point according to the coordinates of the central pixel point of the video frame comprises the following steps:
acquiring the coordinates of the central pixel point of the video frame and recording them as $(x_0, y_0)$; taking any one key point on the video frame as a target key point and recording its coordinates in the video frame as $(x, y)$; and obtaining the centrality of the target key point:

$$C = \frac{1}{2\pi\sigma_x\sigma_y}\exp\!\left(-\left(\frac{(x-x_0)^{2}}{2\sigma_x^{2}}+\frac{(y-y_0)^{2}}{2\sigma_y^{2}}\right)\right)$$

wherein $C$ is the centrality of the target key point $(x, y)$; $x_0$ is the abscissa and $y_0$ the ordinate of the central pixel point of the video frame; $x$ is the abscissa and $y$ the ordinate of the target key point in the video frame; $\sigma_x$ is the standard deviation of the abscissas and $\sigma_y$ the standard deviation of the ordinates of all pixel points in the video frame; $\pi$ is the circumference ratio; and $\exp$ is the exponential function with the natural constant as its base;
the method for obtaining the probability of the segmented frame as the main frame according to the key points of a plurality of space categories in each attribute category in the segmented frame comprises the following steps:
taking any one of the divided frames as a target divided frame, acquiring the centrality mean of all key points in each attribute category of the target divided frame as the centrality mean of each attribute category, taking the largest centrality mean among the centrality means of all attribute categories as the maximum centrality mean of the target divided frame and the next largest as its sub-maximum centrality mean; the probability that the target divided frame is a main body frame is:

$$p = \left(1-\frac{\bar{C}_{\mathrm{sub}}}{\bar{C}_{\max}}\right)\cdot\frac{1}{M}\sum_{i=1}^{M}\frac{1}{n_i^{2}}\sum_{j=1}^{n_i}\frac{k_{i,j}}{K_{i,j}}$$

wherein $p$ is the probability that the target divided frame is a main body frame; $M$ is the number of attribute categories in the target divided frame; $n_i$ is the number of space categories in the $i$-th attribute category of the target divided frame; $k_{i,j}$ is the number of key points in the $j$-th space category of the $i$-th attribute category of the target divided frame; $K_{i,j}$ is the number of key points contained in the convex hull region of all key points in the $j$-th space category of the $i$-th attribute category of the target divided frame; $\bar{C}_{\max}$ is the maximum centrality mean of the target divided frame; and $\bar{C}_{\mathrm{sub}}$ is the sub-maximum centrality mean of the target divided frame.
2. The system for managing clinical simulation teaching data of nursing according to claim 1, wherein the step of obtaining a key chain according to the matching relation of key points between adjacent video frames comprises the steps of:
dividing two matched key points in all adjacent video frames of the simulation teaching video into a category to obtain a plurality of categories, and forming a key chain by all the key points in each category according to the sequence of the video frames.
3. The system for managing clinical simulation teaching data of nursing according to claim 1, wherein the step of acquiring the divided frames from the centrality chain comprises the steps of:
and taking the video frame in which the key point corresponding to the centrality larger than the preset first threshold value in each centrality chain is positioned as a segmentation frame.
4. The system for managing clinical simulation teaching data of claim 1, wherein the step of classifying all key points into a plurality of attribute categories based on centrality of all key points on the divided frames comprises the steps of:
constructing a centrality histogram according to the centrality of all key points in the segmentation frame, wherein the abscissa in the centrality histogram is the centrality size and the ordinate is the number of key points corresponding to each centrality; performing Otsu multi-threshold segmentation on the centrality histogram, dividing the centrality into a plurality of classes, and forming a category from all key points corresponding to each class as an attribute category.
5. The system for managing clinical simulation teaching data of nursing according to claim 1, wherein the step of obtaining the key chain of the subject according to the attribute type of the divided frame comprises the steps of:
and acquiring a key chain to which each key point in the attribute type corresponding to the maximum centrality mean value of the divided frames belongs as a main key chain.
6. The system for managing clinical simulation teaching data of claim 1, wherein the step of obtaining the gray scale variation value of the divided frame according to the gray scale values of all key points on each main key chain of the divided frame comprises the steps of:
and acquiring the variance of the gray values of all key points on each main body key chain, taking the variance as the gray variance of each main body key chain, and taking the average value of the gray variances of all main body key chains of the divided frames as the gray variation value of the divided frames.
7. The system for managing clinical simulation teaching data of claim 1, wherein the step of acquiring the subject area of each video frame in the image group based on the subject keypoints in the image of interest in the image group comprises the steps of:
and taking all key points in the attribute type corresponding to the maximum centrality mean value in the concerned image in each image group as main body key points in the concerned image, taking key points on the same key chain with the main body key points in the rest video frames in the image group as main body key points of each video frame, and taking a convex hull region formed by all the main body key points of each video frame in the image group as a main body region of each video frame.
8. The system for managing clinical simulation teaching data of claim 1, wherein the step of obtaining the filter window size based on centrality of all key points in the non-body area of the video frame comprises the steps of:
taking the average value of centrality of all key points in a non-main area of the video frame as the centrality of the non-main area of the video frame, rounding and rounding the inverse of the centrality of the non-main area to obtain the filter window size of the non-main area.
CN202310537484.2A 2023-05-15 2023-05-15 Clinical simulation teaching data management system of nursing Active CN116389761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310537484.2A CN116389761B (en) 2023-05-15 2023-05-15 Clinical simulation teaching data management system of nursing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310537484.2A CN116389761B (en) 2023-05-15 2023-05-15 Clinical simulation teaching data management system of nursing

Publications (2)

Publication Number Publication Date
CN116389761A CN116389761A (en) 2023-07-04
CN116389761B (en) 2023-08-08

Family

ID=86978920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310537484.2A Active CN116389761B (en) 2023-05-15 2023-05-15 Clinical simulation teaching data management system of nursing

Country Status (1)

Country Link
CN (1) CN116389761B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018006825A1 (en) * 2016-07-08 2018-01-11 腾讯科技(深圳)有限公司 Video coding method and apparatus
CN109697416A (en) * 2018-12-14 2019-04-30 腾讯科技(深圳)有限公司 A kind of video data handling procedure and relevant apparatus
CN109982071A (en) * 2019-03-16 2019-07-05 四川大学 The bis- compression video detecting methods of HEVC based on time space complexity measurement and local prediction residual distribution
CN116074585A (en) * 2023-03-03 2023-05-05 乔品科技(深圳)有限公司 Super-high definition video coding and decoding method and device based on AI and attention mechanism

Also Published As

Publication number Publication date
CN116389761A (en) 2023-07-04

Similar Documents

Publication Publication Date Title
US8290295B2 (en) Multi-modal tone-mapping of images
CN109325550B (en) No-reference image quality evaluation method based on image entropy
Guo et al. A new method of detecting micro-calcification clusters in mammograms using contourlet transform and non-linking simplified PCNN
CN110163111B (en) Face recognition-based number calling method and device, electronic equipment and storage medium
Khan et al. Localization of radiance transformation for image dehazing in wavelet domain
CN102332162A (en) Method for automatic recognition and stage compression of medical image regions of interest based on artificial neural network
CN108447059B (en) Full-reference light field image quality evaluation method
WO2022194152A1 (en) Image processing method and apparatus based on image processing model, and electronic device, storage medium and computer program product
CN112529870A (en) Multi-scale CNNs (CNNs) lung nodule false positive elimination method based on combination of source domain and frequency domain
Yue et al. Perceptual quality assessment of enhanced colonoscopy images: A benchmark dataset and an objective method
Cai et al. Identifying architectural distortion in mammogram images via a se-densenet model and twice transfer learning
CN113554739A (en) Relighting image generation method and device and electronic equipment
CN116071337A (en) Endoscopic image quality evaluation method based on super-pixel segmentation
CN116389761B (en) Clinical simulation teaching data management system of nursing
CN112712482B (en) Image defogging method based on linear learning model
CN113989588A (en) Self-learning-based intelligent evaluation system and method for pentagonal drawing test
CN111953989A (en) Image compression method and device based on combination of user interaction and semantic segmentation technology
CN115147895B (en) Face fake identifying method and device
CN114155400B (en) Image processing method, device and equipment
CN111428713B (en) Automatic ultrasonic image classification method based on feature fusion
Li Image super-resolution algorithm based on RRDB model
CN110458223B (en) Automatic detection method and detection system for bronchial tumor under endoscope
CN113033585A (en) Big data based image identification method
Vo et al. Multi-Range Fusion for X-ray Image Enhancement
CN112995488B (en) High-resolution video image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant