CN118226983A - Touchable stereoscopic display system - Google Patents
- Publication number
- CN118226983A CN118226983A CN202410521441.XA CN202410521441A CN118226983A CN 118226983 A CN118226983 A CN 118226983A CN 202410521441 A CN202410521441 A CN 202410521441A CN 118226983 A CN118226983 A CN 118226983A
- Authority
- CN
- China
- Prior art keywords
- pixel
- information
- data
- lower computer
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0412—Digitisers structurally integrated in a display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Abstract
The invention provides a touchable stereoscopic display system in the technical field of visual assistance. The system comprises an upper computer and a lower computer. The upper computer comprises an acquisition unit for obtaining image data, a storage unit for storing data, a data processing unit for processing the acquired and stored data, and a data transmission unit for data communication. The lower computer comprises at least one pixel module matrix formed by a plurality of pixel action cylinders arranged in a matrix. The lower computer generates corresponding control instructions according to the spatial information sent by the upper computer and controls the pixel action cylinders to rise and fall. The resulting three-dimensional structure allows a user to obtain external information by touch without actually contacting the object, so that information acquisition is more intuitive and safer, the explorable space is larger, and the information is more accurate. Information such as the position, area, force and track of the user's touches or clicks is fed back to the upper computer in real time to form an interaction.
Description
Technical Field
The invention relates to the technical field of vision assistance, in particular to a touchable stereoscopic display system.
Background
Without the help of sighted people or other equipment, a blind person whose eyes or optic nerves are damaged cannot obtain scene information and cannot perceive photographs or moving images, such as the current environment, photographs of family members, cartoon pictures or films. Although some information can be obtained with the help of sighted people, voice-broadcast equipment or even guide dogs, sighted companions cannot always accompany the blind, and not every blind person has the financial means to support a long-term companion or to keep a guide dog.
Auxiliary devices currently common on the market include the white cane, acoustic prompt devices, vibration prompt devices, Braille dot displays and implantable blind-assistance devices. However, these devices still have significant drawbacks: the white cane easily interferes with others and has a narrow detection range; the Braille dot display can only present Braille and cannot intuitively display scene or image information; the implantable blind-assistance device is invasive to the human body; acoustic prompt equipment easily interferes with the blind person's own judgment and suffers from delay; and the vibration prompt device can only alert the blind person without conveying further information.
Therefore, how to provide a blind-assistance device with a larger detection range, richer information acquisition and safer use is a problem to be solved.
Disclosure of Invention
In order to improve the above problems, the present invention provides a touchable stereoscopic display system.
In a first aspect of the embodiments of the present invention, a touchable stereoscopic display system is provided, including an upper computer and a lower computer,
The upper computer comprises an acquisition unit, a storage unit, a data processing unit and a data transmission unit, wherein the acquisition unit is used for acquiring image data, the storage unit is used for storing cache data, offline data and pre-stored data, the data processing unit is used for processing the image data acquired by the acquisition unit and the data transmission unit or the data stored by the storage unit, and the data transmission unit is used for data communication between the upper computer and the lower computer, between the upper computer and the upper computer, and between the upper computer and other external equipment;
The lower computer comprises at least one pixel module matrix, wherein the pixel module matrix consists of a control main board, a power supply module and a plurality of pixel action cylinders which are arranged in a matrix manner, and each pixel action cylinder comprises an action lifting cylinder, a motor for driving the action lifting cylinder, an action recognition module and a plurality of types of sensors;
The data processing unit generates space information for indicating a lower computer according to the image data acquired by the acquisition unit, the data transmission unit or the data stored by the storage unit, and the space information is sent to the lower computer or other upper computers through the data transmission unit;
And the lower computer generates a corresponding control instruction according to the space information sent by the upper computer, and controls the pixel action cylinders in the pixel module matrix to lift.
Optionally, the method for generating the spatial information for indicating the lower computer by the data processing unit according to the image data acquired by the acquisition unit specifically includes:
Determining, according to the resolution of the image data and the size of the pixel module matrix, the projection area in the image data (whether captured from the surrounding environment or obtained from other sources) that corresponds to each pixel action cylinder; and generating spatial information matched with the pixel module matrix according to the projection areas.
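As an illustrative sketch (not from the patent), the projection-area determination above can be expressed as follows; the function name and the even block partition are assumptions:

```python
# Hypothetical sketch: given the image resolution and the pixel-module-matrix
# size, compute the rectangular projection region of the image covered by
# each pixel action cylinder, distributing pixels as evenly as possible.

def project_regions(img_w, img_h, mat_w, mat_h):
    """Map each (col, row) cylinder to its image region [x0, x1) x [y0, y1)."""
    regions = {}
    for row in range(mat_h):
        y0 = row * img_h // mat_h
        y1 = (row + 1) * img_h // mat_h
        for col in range(mat_w):
            x0 = col * img_w // mat_w
            x1 = (col + 1) * img_w // mat_w
            regions[(col, row)] = (x0, x1, y0, y1)
    return regions

regions = project_regions(1920, 1080, 192, 108)
# each cylinder then covers a 10x10 block of image pixels
```

With a 1920x1080 image and a 192x108 matrix, each cylinder's region is exactly a 10x10 pixel block, matching the worked example later in the description.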
Optionally, the method for generating the spatial information for indicating the lower computer by the data processing unit according to the image data acquired by the acquisition unit specifically further includes:
And determining the space height information corresponding to each pixel action cylinder according to the picture stereoscopic scene information of the image data.
Optionally, the method for generating the spatial information for indicating the lower computer by the data processing unit further includes:
and acquiring externally input image data or other data, and generating space information for indicating a lower computer according to the externally input data.
Optionally, the lower computer further includes an action recognition module, configured to recognize an operation action performed by a user on the pixel module matrix, and the method for controlling the pixel action cylinder in the pixel module matrix to lift by the lower computer further includes:
acquiring the identified operation action and the specific position of the user for executing the operation action;
and generating a control instruction corresponding to the operation action according to a preset instruction generation strategy.
Optionally, the method for controlling the pixel action cylinder in the pixel module matrix to lift by the lower computer further comprises:
detecting whether other lower computers which have established communication connection exist;
and controlling the lower computer of the touchable stereoscopic display system to generate control instructions corresponding to the current allocation display strategy according to the preset allocation display strategy.
Optionally, the method for controlling the pixel action cylinder in the pixel module matrix to lift by the lower computer further comprises:
networking and linking a plurality of touchable stereoscopic display systems;
the touchable stereoscopic displays that have completed networking linkage mutually display each other's data;
when any touchable stereoscopic display detects a user operation action, the other displays map the action synchronously, so that the display data of all touchable stereoscopic displays remain synchronized.
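A minimal sketch of the networking-linkage behaviour above, assuming a simple in-process model: each display holds a height map, and a user action detected on one display is broadcast so that all linked displays apply it. The `Display` class and its methods are illustrative, not from the patent:

```python
# Hypothetical model: linked displays mirror each other's user actions so
# their displayed height maps stay synchronized.

class Display:
    def __init__(self, rows, cols):
        self.heights = [[0] * cols for _ in range(rows)]
        self.peers = []

    def link(self, other):
        # mutual networking linkage between two displays
        self.peers.append(other)
        other.peers.append(self)

    def apply(self, row, col, height):
        self.heights[row][col] = height

    def on_user_action(self, row, col, height):
        # local update, then synchronous mapping to every linked display
        self.apply(row, col, height)
        for peer in self.peers:
            peer.apply(row, col, height)

a, b = Display(4, 4), Display(4, 4)
a.link(b)
a.on_user_action(1, 2, 5)
# b's height map now mirrors the action detected on a
```

In a real deployment the broadcast would travel over the data transmission unit (WIFI, Bluetooth, etc.) rather than in-process method calls.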
Optionally, the method for generating the spatial information for indicating the lower computer by the data processing unit according to the image data acquired by the acquisition unit specifically includes:
Acquiring three-dimensional coordinate values and distance information of each image pixel in the image data; the three-dimensional coordinate value is a three-dimensional coordinate in a three-dimensional coordinate system established by the image pixel content in the real scene, and the distance information is a real distance between the image pixel content and the acquisition unit in the real scene;
Constructing a three-dimensional dynamic model in a data processing unit according to the three-dimensional coordinate values and the distance information;
And generating corresponding space information according to the three-dimensional dynamic model and the view field change proportion conversion strategy.
Optionally, when one pixel action cylinder corresponds to a plurality of image pixels, the method for determining the spatial height information corresponding to each pixel action cylinder includes:
acquiring three-dimensional coordinate values and distance information of a plurality of image pixels corresponding to the pixel action cylinder;
Respectively calculating the average value of the three-dimensional coordinate value and the distance information;
And obtaining the space height information corresponding to the pixel action cylinder based on the average value.
Optionally, the upper computer generates the spatial information at a preset refresh frequency and sends the spatial information to the lower computer.
Optionally, for different pixel module matrixes, or for different pixel action cylinders in the same pixel module matrix, the upper computer generates spatial information at different refresh frequencies and sends it to the lower computer.
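The differing refresh frequencies above can be sketched as a tick-based scheduler; the period values and region names are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: each cylinder (or region) has its own refresh period
# in ticks; a cylinder is refreshed on tick t when t is a multiple of its
# period, so frequently changing regions can be given shorter periods.

def due_for_refresh(tick, periods):
    """periods maps cylinder id -> refresh period in ticks;
    returns the set of cylinders to update on this tick."""
    return {cyl for cyl, period in periods.items() if tick % period == 0}

periods = {"center": 1, "edge": 5}   # center region refreshed every tick
assert due_for_refresh(10, periods) == {"center", "edge"}
assert due_for_refresh(3, periods) == {"center"}
```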
In summary, the invention provides a touchable stereoscopic display system in which the pixel module matrix of the lower computer generates a corresponding stereoscopic structure, presenting a relief effect based on the external scene acquired by the upper computer. A user can obtain external information by touching this stereoscopic structure without actually touching the photographed object, so that information acquisition is more intuitive and safer, and the information is more accurate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a system architecture of a touch-enabled stereoscopic display system according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a pixel module matrix according to an embodiment of the present invention;
FIG. 3 is a top view of a pixel module matrix according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of a pixel actuating cylinder according to an embodiment of the present invention.
Icon:
an upper computer 100; a lower computer 200; an acquisition unit 110; a data processing unit 120; a data transmission unit 130; a storage unit 140; a pixel action cylinder 210; a control main board 220; a power module 230; an action lifting cylinder 211; a motor 212.
Detailed Description
Auxiliary devices currently common on the market include the white cane, acoustic prompt devices, vibration prompt devices, Braille dot displays and implantable blind-assistance devices. However, these devices still have significant drawbacks: the white cane easily interferes with others and has a narrow detection range; the Braille dot display can only present Braille and cannot intuitively display scene or image information; the implantable blind-assistance device is invasive to the human body; acoustic prompt equipment easily interferes with the blind person's own judgment and suffers from delay; and the vibration prompt device can only alert the blind person without conveying further information.
Therefore, the provision of a blind-assistance device that has a larger detection range, acquires more information and is safer to use is a problem to be solved.
In view of this, the designer has devised a touchable stereoscopic display system in which the pixel module matrix of the lower computer generates a corresponding stereoscopic structure, presenting a relief effect based on the external scene acquired by the upper computer. A user can obtain external information by touching the stereoscopic structure without actually touching the photographed object, so that information acquisition is more intuitive and safer, and the information is more accurate.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that, directions or positional relationships indicated by terms such as "top", "bottom", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or those that are conventionally put in use, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
Referring to fig. 1, a touchable stereoscopic display system is provided in this embodiment.
As shown in fig. 1, the touchable stereoscopic display system provided by the invention comprises an upper computer 100 and a lower computer 200, wherein the upper computer 100 comprises an acquisition unit 110, a storage unit 140, a data processing unit 120 and a data transmission unit 130, the acquisition unit 110 is used for acquiring image data, the storage unit 140 is used for storing cache data, offline data and pre-stored data, the data processing unit 120 is used for processing the image data acquired by the acquisition unit 110 and the data transmission unit 130 or the data stored by the storage unit 140, and the data transmission unit 130 is used for data communication between the upper computer 100 and the lower computer 200, between the upper computer 100 and between the upper computer 100 and other external devices;
The lower computer 200 includes at least one pixel module matrix, the pixel module matrix is composed of a control main board 220, a power module 230 and a plurality of pixel action cylinders 210 arranged in a matrix, the pixel action cylinders 210 include an action lifting cylinder 211, a motor 212 driving the action lifting cylinder 211, an action recognition module and a plurality of types of sensors;
the data processing unit 120 generates spatial information for indicating the lower computer 200 according to the image data acquired by the acquisition unit 110, the data transmission unit 130 or the data stored by the storage unit 140, and the spatial information is sent to the lower computer 200 or other upper computers 100 through the data transmission unit 130;
The lower computer 200 generates a control instruction corresponding to the spatial information according to the spatial information sent by the upper computer 100, and controls the pixel actuating cylinder 210 in the pixel module matrix to lift.
In an embodiment of the present invention, the acquisition unit 110 may be composed of multiple cameras, a radar camera or a laser sensor; the storage unit 140 is composed of a storage module and an expansion interface; the data processing unit 120 is composed of a multi-core CPU motherboard and a power module; and the data transmission unit 130 may be composed of a WIFI module, a Bluetooth module or a NearLink ("star flash") module. The upper computer 100 may be formed by combining these four unit modules, or may be an integrated device containing all four, such as a mobile phone or another intelligent terminal. In actual use, the upper computer 100 can be worn on the head or neck of the blind person, or clipped onto a hat, glasses, collar or placket, to facilitate the acquisition of image data by the acquisition unit 110.
The lower computer 200 is configured in two ways: one comprises only a single pixel module matrix with a fixed array; the other comprises a plurality of pixel module matrixes, whose arrays may be the same or different, which the user splices together according to the usage requirement to obtain the needed overall array. In practical use, the upper computer 100 detects the configuration of the lower computer 200 and determines the number of connected pixel module matrixes and the number of elements in the matrix array, where each element corresponds to one pixel action cylinder 210.
The pixel module matrix is shown in fig. 2-3 and includes a control main board 220, a power module 230 and a plurality of pixel action cylinders 210 arranged in a matrix. The control main board 220 controls the motion of each pixel action cylinder 210 of the pixel module matrix; the specific control is determined by the spatial information sent by the upper computer 100.
The invention provides a touchable stereoscopic display system, which comprises the following specific working processes:
Firstly, the acquisition unit 110 acquires image data of the real scene and maps the pixels of the image to the pixel action cylinders 210 in the pixel module matrix of the lower computer 200 according to a certain projection relationship. Spatial information for instructing the lower computer 200 is then generated from the specific content of the image pixels, so that the lower computer 200 can control the lifting action of each pixel action cylinder 210 according to the spatial information, and the three-dimensional structure finally formed by the pixel module matrix matches the content of the image.
In actual use, the projection area of the image data corresponding to each pixel action cylinder 210 is determined according to the resolution of the image data and the size of the pixel module matrix; spatial information matched with the pixel module matrix is then generated according to the projection areas.
Because the resolution of the acquired image data is generally high, processing falls into two cases. In the first, the number of pixel action cylinders 210 in the pixel module matrix equals the image resolution; the pixel action cylinders 210 then correspond one-to-one with the image pixels, and in the generated spatial information the indication for each pixel action cylinder 210 corresponds to a single image pixel. In the second, the number of pixel action cylinders 210 is smaller than the image resolution; the image data must then be compressed according to the size difference, so that after compression each pixel action cylinder 210 corresponds to several image pixels, and in the generated spatial information the indication for each pixel action cylinder 210 is calculated from those image pixels. The calculation may take the average of the image pixels, or weights may be assigned to image pixels at different positions according to specific requirements.
The spatial information derived from the content of the image pixels is calculated from the three-dimensional coordinate value and the distance information of each image pixel, where the three-dimensional coordinate value is the coordinate of the image pixel content in a three-dimensional coordinate system established in the real scene, and the distance information is the real distance between the image pixel content and the acquisition unit 110 in the real scene. This better reproduces what a sighted person would perceive through the eyes: the three-dimensional coordinates describe the specific position of the object, and the distance information describes the distance between the object and the user. Considering both together allows the external situation to be reflected faithfully.
In a preferred embodiment, when the spatial information is generated it is processed according to a field-of-view proportional-conversion strategy: the closer the region corresponding to a pixel action cylinder 210 is to the acquisition unit 110, the more detail is displayed; the farther it is from the acquisition unit 110, the less detail is displayed.
As another embodiment, the spatial information may be generated entirely according to the actual scale of the real scene: the farther an object is from the acquisition unit 110, the lower the corresponding action cylinder rises; the closer it is, the higher the cylinder rises.
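A minimal sketch of this true-scale strategy, assuming a linear distance-to-height mapping; the range limit and maximum lift are illustrative values, not from the patent:

```python
# Hypothetical mapping: the farther an object is from the acquisition unit,
# the lower the cylinder rises. Distances are clamped to [0, max_distance].

def lift_height(distance_m, max_distance_m=5.0, max_height_mm=20.0):
    """Linearly map distance in [0, max_distance] to a lift height in
    [max_height, 0] millimetres."""
    d = min(max(distance_m, 0.0), max_distance_m)
    return max_height_mm * (1.0 - d / max_distance_m)

assert lift_height(0.0) == 20.0   # closest object: fully raised
assert lift_height(5.0) == 0.0    # at or beyond range: fully lowered
assert lift_height(2.5) == 10.0
```

A nonlinear curve (e.g. emphasizing near-field detail) could be substituted without changing the interface.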
The following is a specific example:
the image resolution of the host computer 100 is 1920×1080, and each pixel has a three-dimensional coordinate value and distance information.
When the pixel array of the lower computer 200 is 1920×1080, the displayed information corresponds one-to-one with the image data of the upper computer 100, and no clipping or compression is required. The height to which each action lifting cylinder 211 of the lower computer 200 rises corresponds, in a fixed relationship, to the three-dimensional coordinate value and distance information sent by the upper computer 100.
When the pixel array of the lower computer 200 is not 1920×1080, the information must be clipped and compressed in proportion between the upper computer 100 pixels and the lower computer 200 pixels. For example, if the lower computer 200 array is 192×108, every 10×10 block of image pixels is processed into one pixel, and the distance displayed by the lower computer 200 is the average value of that 10×10 block.
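The 10×10 compression in this example can be sketched as a block average over the distance map; the function name and the toy input are illustrative:

```python
# Hypothetical sketch: reduce a distance map by averaging each block x block
# tile, e.g. 1920x1080 -> 192x108 with block=10; the averaged distance then
# drives the corresponding pixel action cylinder.

def block_average(dist, block=10):
    rows, cols = len(dist), len(dist[0])
    out = []
    for r in range(0, rows, block):
        out.append([
            sum(dist[r + dr][c + dc]
                for dr in range(block) for dc in range(block)) / (block * block)
            for c in range(0, cols, block)
        ])
    return out

# 20x20 toy map: left half at distance 1.0 m, right half at 3.0 m
toy = [[1.0] * 10 + [3.0] * 10 for _ in range(20)]
small = block_average(toy)   # reduces to a 2x2 map
```

Weighted averages (as mentioned above for pixels at different positions) would replace the uniform `1 / (block * block)` factor with per-pixel weights.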
Through the above process, the spatial height information corresponding to each pixel action cylinder 210 can be calculated under the different strategies; combined, these values constitute the current spatial information.
When the image data acquired by the acquisition unit 110 is processed in real time, the touchable stereoscopic display system also provides dangerous-environment recognition. When facing a pit, an excessive drop, a body of water such as a lake or the sea, a high-temperature object or a sharp object, the dangerous condition is identified by image recognition, and the user is warned by sound, whole-device vibration or vibration of the region displaying the dangerous object, prompting the user to stay away or take avoiding action.
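One simple form of the pit/drop check above can be sketched directly on the per-cylinder distance map; the 0.5 m threshold and the neighbour comparison are illustrative assumptions, not the patent's image-recognition method:

```python
# Hypothetical sketch: scan the distance map for a sudden increase in
# distance between adjacent cells (a possible pit or drop-off) and return
# the positions to flag for warning vibration.

def find_drop_offs(distances, threshold=0.5):
    """Return (row, col) cells whose right or lower neighbour is more than
    `threshold` metres farther away, indicating a possible drop."""
    flags = []
    for r, row in enumerate(distances):
        for c, d in enumerate(row):
            right = row[c + 1] if c + 1 < len(row) else d
            below = distances[r + 1][c] if r + 1 < len(distances) else d
            if right - d > threshold or below - d > threshold:
                flags.append((r, c))
    return flags

grid = [[1.0, 1.0],
        [1.0, 2.0]]   # the 2.0 m cell reads as a pit beside 1.0 m ground
assert find_drop_offs(grid) == [(0, 1), (1, 0)]
```

A production system would combine such depth cues with image recognition for hazards that depth alone cannot reveal (high temperature, sharp edges).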
After the spatial information is sent to the lower computer 200, the lower computer 200 generates specific instructions for each pixel action cylinder 210, and the control main board 220 controls the pixel action cylinders 210 to complete the corresponding lifting actions.
As shown in fig. 4, the pixel action cylinder 210 includes an action lifting cylinder 211 and a motor 212 for driving it. The action lifting cylinder 211 is movable in the vertical direction and is driven by the motor 212 when required.
In other embodiments, besides generating spatial information from the image data acquired by the acquisition unit 110, the upper computer 100 may generate spatial information for instructing the lower computer 200 directly from externally input image data or other data. The generation manner is the same as that described above for image data acquired by the acquisition unit 110.
The system may be aimed at the following usage scenarios in particular:
1. A built-in or externally networked high-precision map is obtained and displayed on the lower computer 200 as a real-time stereoscopic image; the user perceives dynamic changes of the map by touch to find directions and routes.
2. Three-dimensional teaching demonstration, used in ordinary teaching activities to demonstrate objects stereoscopically for groups such as blind children or infants, helping them to know the world. The upper computer 100 sends an instruction to display an automobile, and the lower computer 200 displays the three-dimensional image of the automobile for the blind child or infant to perceive by touch; the upper computer 100 sends an instruction to display a sculpture or animal, and the lower computer 200 displays its stereoscopic image for the blind child or infant to perceive by touch.
3. When another person communicates with a blind person, the image to be displayed is sent to the upper computer 100, and the upper computer 100 controls the lower computer 200 to display the stereoscopic image of the corresponding image.
As a preferred embodiment, the top end of the pixel action cylinder 210 may also be provided with an action recognition module or several types of sensors for recognizing the operation actions performed by the user on the pixel module matrix. After the specific operation action and the specific position at which the user performs it are identified, a control instruction corresponding to the operation action is generated according to a preset instruction generation strategy.
The correspondence between operation actions and control instructions, as well as the specific positions at which the operation actions are executed, may be set in advance.
For example 1: the user draws a circle (or double-taps, or long-presses) with a finger on a certain part of the display area, which triggers the spatial image enlarging function for the circled area (i.e., the stereoscopic display information of the circled area is enlarged to fill the stereoscopic display of the whole device). The gestures and their associated functions may be customized.
Note that in the image enlarging function, the upper computer 100 reprocesses the acquired scene according to the feedback command for the enlarged region, and sends the processed image of the required region to the lower computer 200 for display.
For example 2: the user draws a C (or slides multiple fingers to the left simultaneously) within the display area, and the system returns to full-screen display.
For example 3: the user draws a Z (or slides multiple fingers to the right simultaneously) in the display area, and the system enters the navigation interface. On the basis of the stereoscopic scene, the navigation direction is simultaneously displayed in the form of a floating arrow.
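Examples 1-3 amount to a gesture-to-command dispatch table. The sketch below is purely illustrative: the patent only states that gestures and their functions are customizable, so the gesture names, handlers, and return values here are assumptions.

```python
# Hypothetical dispatch table mirroring examples 1-3 above.
def enlarge_region(position):
    return ("enlarge", position)   # example 1: circle / double-tap / long press

def return_full_screen(position):
    return ("full_screen", None)   # example 2: draw a C / multi-finger left swipe

def open_navigation(position):
    return ("navigate", None)      # example 3: draw a Z / multi-finger right swipe

GESTURE_TABLE = {
    "circle": enlarge_region,
    "draw_c": return_full_screen,
    "draw_z": open_navigation,
}

def handle_gesture(gesture, position):
    """Look up the recognized gesture and produce a control command;
    unrecognized gestures are ignored."""
    handler = GESTURE_TABLE.get(gesture)
    return handler(position) if handler else None

command = handle_gesture("circle", (320, 240))  # ('enlarge', (320, 240))
```

Because the table is data rather than hard-coded branches, remapping a gesture to a different function is a one-line change, which matches the customizability the text describes.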
For example 4: when the electrically controlled pixel action cylinders 210 in a certain part of the display area are raised, the user can press that part and move it in any direction; this information can serve as an outward control signal and be transmitted through the upper computer 100 to other application programs, for example to control the movement of objects in a game screen.
For example 5: when the user slides two fingers on the display surface and the distance between them increases or decreases, the upper computer 100 correspondingly increases or decreases the focal length of the multi-camera, so that the user can conveniently observe distant or near scenes.
On this basis, in addition to recognizing touch actions, the pressure applied by the user to the pixel action cylinder 210 can also be detected. When capacitive sensing cannot be activated because the touch is made by another object, the touch state can be judged from the pressure change. Pressure-based touch feedback can thus serve as an input control source.
For example 1: the contact area between an object and the stereoscopic display can be shown on the upper computer 100 according to the pressure within a certain area.
For example 2: the thrust of the pixel action cylinder 210 can be adapted to each user's habitual touch strength. For a user who presses hard, the torque of the pixel action cylinder 210 is increased; for a user who presses lightly, the torque is reduced.
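The torque adaptation in example 2 can be sketched as a simple proportional adjustment toward a target press pressure. All the constants (target pressure, gain, torque limits) are illustrative assumptions, not values from the patent.

```python
def adapt_torque(torque, pressure, target=2.0, gain=0.2,
                 torque_min=0.5, torque_max=5.0):
    """Nudge cylinder torque toward the user's habitual press strength:
    pressure above the target raises torque, pressure below lowers it,
    clamped to an assumed safe motor range."""
    torque += gain * (pressure - target)
    return max(torque_min, min(torque_max, torque))

t = adapt_torque(1.0, pressure=4.0)   # hard press -> torque rises
```

Running this once per touch sample would let the cylinders converge gradually on each user's strength rather than jumping on a single reading.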
It should be noted that when a user uses the touchable stereoscopic display system provided by the embodiment of the invention for the first time, the lower computer 200 does not need to be calibrated if it consists of a single pixel module matrix, but needs to be calibrated if it consists of a plurality of pixel module matrices. The specific method is as follows:
After the equipment is started, a voice prompt asks the user to touch from the leftmost edge to the rightmost edge, and then from the topmost edge to the bottommost edge, each once. This completes the calibration and determines the relative position of each matrix within the equipment. The range so defined covers the pixel action cylinders 210 that the user will interact with in use.
As a preferred embodiment, in order to reflect the external situation more promptly and accurately, the upper computer 100 generates the spatial information at a preset refresh frequency and transmits it to the lower computer 200.
Typically, the refresh frequency ranges from 1 to 25 frames per second. It can be set by the user or adjusted automatically according to the use environment: for example, 15 frames per second when the user walks slowly indoors, and 25 frames per second in an outdoor environment.
On this basis, in order to further improve the user experience and save control resources, the upper computer 100 can generate and send spatial information to the lower computer 200 at different refresh frequencies for different pixel module matrices, or for different pixel action cylinders 210 within the same pixel module matrix. For example, the area currently touched by the blind user's hand is refreshed at 15-25 frames per second, while areas detected as not covered by the hand are refreshed at 1 frame per second. The frequency can also be differentiated by display content: static images such as photos at 1 frame per second, and real scenes such as pictures captured in real time by the acquisition unit 110 at 25 frames per second.
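The per-region refresh policy above reduces to a small decision function. The exact frame-rate values below follow the examples in the text, but the two-flag region model is an illustrative simplification.

```python
def region_refresh_fps(hand_covered, live_content):
    """Pick a refresh frequency (frames per second) for one pixel
    module matrix or group of pixel action cylinders."""
    if not live_content:
        return 1                        # static content, e.g. a photo
    return 25 if hand_covered else 1    # live scene: full rate under the hand

fps = region_refresh_fps(hand_covered=True, live_content=True)  # 25
```

Driving each cylinder group at its own rate means the control main board spends most of its update budget on the region the user is actually reading with their hand.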
As a preferred embodiment, besides gestures, the user can also control the system directly by voice, for example with voice commands to adjust the display frame rate.
As another embodiment, the system may detect whether other touchable stereoscopic display systems have established a communication connection. When a plurality of touchable stereoscopic display systems are connected, they are networked and linked, and the linked displays show each other's data. The lower computer 200 of each touchable stereoscopic display system can be controlled to generate control instructions corresponding to a preset allocation display strategy. When any touchable stereoscopic display detects a user operation action, the other displays map the action synchronously, so that the display data of all touchable stereoscopic displays stay synchronized.
Based on this mode of operation, an object-lifting map controller can be formed, in which multiple touchable stereoscopic display systems display each other's scenes. For example, when several touchable stereoscopic display systems are used simultaneously and each displays a ball, the other systems synchronously map the action whenever a user manipulates or moves one of the balls, keeping the relative positions of the balls shown by each lower computer 200 in sync.
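The linkage behaviour can be sketched as a broadcast of each detected action to every display in the group. The class name and message shape below are illustrative assumptions; the patent does not specify a protocol.

```python
class LinkedDisplays:
    """Minimal sketch of networked linkage: an operation detected on
    one display is mirrored so every linked display shows the same data."""

    def __init__(self, count):
        # One shared-state dict per lower computer in the link group.
        self.states = [{} for _ in range(count)]

    def on_user_action(self, source_index, key, value):
        """Apply the action locally, then mirror it to every peer
        (the loop includes the source display itself)."""
        for state in self.states:
            state[key] = value

group = LinkedDisplays(3)
group.on_user_action(0, "ball_position", (4, 7))
```

A real implementation would carry these updates over the data transmission units of the upper computers 100; the essential point is that every display applies the same ordered stream of actions.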
Stereoscopic perception games or multi-player interactive perception games for the blind can also be developed.
For example 1: a simple ball rebound game. After the devices of users A and B are linked, user A flicks a ball on their device; the ball's trajectory, speed, and other information are collected by the sensors of the pixel action cylinders 210 and transmitted to user B's device, which reproduces the trajectory and speed through the lifting of its pixel action cylinders 210. User B can feed the ball's movement information back to user A in the same way, realizing an interactive game played back and forth.
For example 2: a stereoscopic five-in-a-row (Gomoku) game. On the device, every fifth row and column of electronically controlled pixel action cylinders 210 is raised 5 mm to show the grid lines. Each piece occupies a 5×5 pixel area: one player's pieces are shown by raising the diagonal pixels of the 5×5 area, the other's by raising the 3×3 pixels centered on the cell's center point. The two users can set different gestures or clicking actions to complete their moves; when one player presses within a grid cell on the equipment, the corresponding pixels are raised to display that player's piece. A blind user can feel the positions of the grid lines and the shapes and arrangement of the different pieces by touch, and so complete the game.
For example 3: a stereoscopic version of Tetris. The user learns the shape of the falling block by touch and adjusts it to fit the gap and clear the line.
The touchable stereoscopic display system provided by the embodiment of the invention can not only provide visual assistance for the blind, but can also be used in scenes where objects cannot be seen normally.
For example, through multiple input interfaces, the system can be connected to an unmanned aerial vehicle, a synchronized mobile phone or computer screen, a sonar detection system, or a low-light night vision display.
It can also be used for completely silent information exchange between two or more parties in scenes without sound or light. For example, underwater, where light cannot penetrate far and a diver cannot visually observe distant surroundings, the underwater stereoscopic image detected by sonar can be shown on the display, and the diver perceives the environment ahead by touching the displayed stereoscopic presentation.
In summary, the invention provides a touchable stereoscopic display system in which the pixel module matrix of the lower computer generates a corresponding stereoscopic structure, presenting a relief effect based on the external situation acquired by the upper computer. A user can obtain external information by touching the stereoscopic structure generated by the lower computer without actually touching the photographed object; this way of acquiring information is more intuitive and safer, and the information obtained is more accurate.
In the several embodiments disclosed herein, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Claims (10)
1. A touchable stereoscopic display system, characterized by comprising an upper computer and a lower computer,
The upper computer comprises an acquisition unit, a storage unit, a data processing unit and a data transmission unit, wherein the acquisition unit is used for acquiring image data, the storage unit is used for storing cache data, offline data and pre-stored data, the data processing unit is used for processing the image data acquired by the acquisition unit and the data transmission unit or the data stored by the storage unit, and the data transmission unit is used for data communication between the upper computer and the lower computer, between the upper computer and the upper computer, and between the upper computer and other external equipment;
The lower computer comprises at least one pixel module matrix, wherein the pixel module matrix consists of a control main board, a power supply module and a plurality of pixel action cylinders which are arranged in a matrix manner, and each pixel action cylinder comprises an action lifting cylinder, a motor for driving the action lifting cylinder, an action recognition module and a plurality of types of sensors;
The data processing unit generates space information for indicating a lower computer according to the image data acquired by the acquisition unit, the data transmission unit or the data stored by the storage unit, and the space information is sent to the lower computer or other upper computers through the data transmission unit;
And the lower computer generates a corresponding control instruction according to the space information sent by the upper computer, and controls the pixel action cylinders in the pixel module matrix to lift.
2. The touchable stereoscopic display system according to claim 1, wherein the method for generating spatial information for indicating the lower computer by the data processing unit from the image data acquired by the acquisition unit, received by the data transmission unit and stored in the storage unit specifically comprises:
Determining a projection area of spatial information in the image data pixels corresponding to each pixel action cylinder according to the resolution of the image data and the size of the pixel module matrix;
And generating spatial information matched with the pixel module matrix according to the projection area.
3. The touchable stereoscopic display system according to claim 2, wherein the method for generating spatial information for indicating the lower computer by the data processing unit according to the image data acquired by the acquisition unit specifically further comprises:
And determining the space height information corresponding to each pixel action cylinder according to the picture stereoscopic scene information of the image data.
4. A touchable stereoscopic display system according to claim 3, wherein the method of the data processing unit generating spatial information for indicating a lower computer further comprises:
and acquiring externally input image data or other data, and generating space information for indicating a lower computer according to the externally input data.
5. A touchable stereoscopic display system according to any one of claims 1 to 3, wherein the lower computer further comprises an action recognition module for recognizing an operation action performed by a user on the pixel module matrix, the method for the lower computer to control the lifting of the pixel action cylinders in the pixel module matrix further comprising:
acquiring the identified operation action and the specific position of the user for executing the operation action;
and generating a control instruction corresponding to the operation action according to a preset instruction generation strategy.
6. A touchable stereoscopic display system according to any one of claims 1 to 3, wherein the method for the lower computer to control the lifting of the pixel action cylinders in the pixel module matrix further comprises:
detecting whether other lower computers which have established communication connection exist;
And controlling a lower computer of the touch type stereoscopic display system to generate a control instruction corresponding to the current allocation display strategy according to the preset allocation display strategy.
7. The touchable stereoscopic display system of claim 1, wherein the method for the lower computer to control the lifting of the pixel action cylinders in the pixel module matrix further comprises:
networking and linking a plurality of touchable stereoscopic display systems;
a plurality of touchable stereoscopic displays which complete networking linkage mutually display the data of the other party;
When each touchable stereoscopic display detects a user operation action, other displays synchronously map the user operation action, so that the display data of each touchable stereoscopic display are synchronous.
8. A touchable stereoscopic display system according to claim 2 or 3, wherein the method for the data processing unit to generate spatial information for indicating the lower computer from the image data acquired by the acquisition unit comprises:
Acquiring three-dimensional coordinate values and distance information of each image pixel in the image data; the three-dimensional coordinate value is a three-dimensional coordinate in a three-dimensional coordinate system established by the image pixel content in the real scene, and the distance information is a real distance between the image pixel content and the acquisition unit in the real scene;
Constructing a three-dimensional dynamic model in a data processing unit according to the three-dimensional coordinate values and the distance information;
And generating corresponding space information according to the three-dimensional dynamic model and the view field change proportion conversion strategy.
9. The touchable stereoscopic display system of claim 8, wherein, when the conversion ratio is such that one pixel action cylinder corresponds to a plurality of image pixels, the method of determining the spatial height information corresponding to each pixel action cylinder comprises:
acquiring three-dimensional coordinate values and distance information of a plurality of image pixels corresponding to the pixel action cylinder;
Respectively calculating the average value of the three-dimensional coordinate value and the distance information;
And obtaining the space height information corresponding to the pixel action cylinder based on the average value.
10. The touchable stereoscopic display system according to claim 1, wherein the upper computer generates spatial information at a preset refresh frequency and transmits the spatial information to the lower computer, and for different pixel module matrices or different pixel action cylinders in the same pixel module matrix, the upper computer generates spatial information at different refresh frequencies and transmits the spatial information to the lower computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410521441.XA CN118226983A (en) | 2024-04-28 | 2024-04-28 | Touchable stereoscopic display system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118226983A true CN118226983A (en) | 2024-06-21 |
Family
ID=91511197
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||