WO2022057670A1 - Real-time focusing method, apparatus and system, and computer-readable storage medium - Google Patents
- Publication number
- WO2022057670A1 (PCT/CN2021/116773)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video source
- source data
- captured image
- real
- projection
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
Definitions
- Autofocus is an important projector function that allows the projector to adapt to a wider range of usage scenarios.
- The specific needs fall into two aspects: resetting the focal plane after first installation or after the projector is moved; and correcting thermal defocus, since a projector left on for a long time heats up, and the resulting temperature changes cause the lens and other optical-path components to drift out of focus. In addition, vibration under certain working conditions can cause misalignment and blur. Such real-time blur cannot be corrected by current autofocus technology.
- Traditional focusing schemes must pause the display and project a specific reference image to focus, so they cannot focus imperceptibly. Alternatively, a motion sensor can be used to adjust the focus motor, but because motion-sensor error and noise are large, and the projector's movement and its relative relationship to the screen are complicated, this approach gives poor results and cannot correct blur caused by thermal defocus.
- the present application provides a real-time focusing method, device, system and computer-readable storage medium, which can realize automatic real-time focusing.
- To solve the above technical problem, the present application provides a real-time focusing method. The method includes: acquiring video source data and a captured image; processing the video source data and the captured image to obtain feature information of each; and, based on that feature information, generating a control command and sending it to a motor, so that the motor drives the projection lens to move and achieve focusing.
- The real-time focusing device includes a memory and a processor connected to each other. The memory stores a computer program which, when executed by the processor, implements the above real-time focusing method.
- The projection system includes a real-time focusing device, a projection device, and a camera device. The real-time focusing device receives video source data. The projection device is connected to the real-time focusing device to receive the video source data and perform projection display; it includes a projection lens and a motor connected to each other. The camera device photographs the projected image to obtain a captured image corresponding to the video source data. The real-time focusing device also processes the video source data and the captured image to obtain feature information of each, generates a control command based on that feature information, and sends the command to the motor, so that the motor drives the projection lens to move and achieve focusing.
- Another technical solution adopted in this application is a computer-readable storage medium that stores a computer program which, when executed by a processor, implements the above real-time focusing method.
- The beneficial effects of the present application are as follows. The received video source data is stored, the projection device is controlled to display it, and a captured image corresponding to the video source data is obtained. The video source data and the captured image are then processed to obtain their feature information, from which the adjustment direction of the projection lens can be determined. Control instructions generated from this direction drive the motor to move the projection lens, so focusing happens while the picture is playing: no manual focusing is needed, real-time and imperceptible focusing are achieved, and thermal defocus can be corrected.
- FIG. 1 is a schematic flowchart of an embodiment of a real-time focusing method provided by the present application;
- FIG. 2 is a schematic flowchart of another embodiment of a real-time focusing method provided by the present application;
- FIG. 3 is a schematic flowchart of step 131 in the embodiment shown in FIG. 2;
- FIG. 4 is a schematic diagram of a captured image in the embodiment shown in FIG. 2;
- FIG. 5 is a schematic structural diagram of an embodiment of a real-time focusing device provided by the present application;
- FIG. 6 is a schematic structural diagram of an embodiment of a projection system provided by the present application;
- FIG. 7 is a schematic diagram of the connection between the video source and the real-time focusing device in the embodiment shown in FIG. 6;
- FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided by the present application.
- The application provides a real-time focusing method. The method includes: acquiring video source data and a captured image; processing the video source data and the captured image to obtain feature information of each; and generating a control command based on that feature information and sending it to the motor, so that the motor drives the projection lens to move and achieve focusing.
- FIG. 1 is a schematic flowchart of an embodiment of a real-time focusing method provided by the present application. The method includes:
- Step 11: Acquire video source data and a captured image.
- The video source data may be the pixel values of an image, and different video source data may be received in different time periods. The data may be acquired online in real time, received from other devices, or read from a storage device. To facilitate later comparison with captured images, the video source data can be stored in the real-time focusing device. The amount stored is relatively small and can be cleaned periodically; for example, the video source data within a preset window around the current frame (e.g., 5 seconds) can be cached.
- The video source data sent by the video source is received and forwarded to the projection device, which is controlled to display it and generate a projected display image. After the projection device starts displaying, a synchronous shooting signal is sent to the camera device, which photographs the projected image on the projection screen to obtain a captured image corresponding to the video source data; the captured image is at least partially the same as the video source data. The real-time focusing device then takes the buffered video source data of the same frame out of the cache so that it can be compared with the captured image.
- all the received video source data may also be buffered, and then a set of video source data is retrieved from the stored video source data for projection display.
- the acquisition of video source data may be performed every frame, or may be performed every several frames or several seconds.
- Step 12: Process the video source data and the captured image to obtain feature information of the video source data and of the captured image.
- The captured image can be compared with the video source data. Specifically, only a part of the captured image may be compared with a part of the video source data (a local comparison), or the entire captured image may be compared with all of the video source data (a global comparison). If both the video source data and the captured image are sharp, no focus adjustment is needed; if the video source data is sharp but the captured image is blurred, refocusing is required. After the captured image is acquired, image processing methods are used to analyze the video source data and the captured image and extract feature information from each. Note that blur and sharpness are relative terms: if the difference in sharpness between the video source data and the captured image is within a range, the two can be considered equally sharp (or equally blurred); otherwise the captured image is considered blurred.
- The feature information includes feature points. A specific image processing algorithm or a deep learning algorithm can be used to process the video source data and the captured image and obtain multiple feature points in each; for example, a feature extraction algorithm can be applied to the images.
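The patent does not fix a particular feature-extraction algorithm, so the following is only a minimal, illustrative sketch of one option: treating high-gradient pixels as feature points. All names and the percentile threshold are assumptions, not from the patent.

```python
import numpy as np

def extract_feature_points(img, percentile=95):
    """Return (row, col) coordinates of high-gradient pixels.

    A stand-in for the unspecified feature-extraction step: pixels
    whose gradient magnitude falls in the top few percent are treated
    as feature points.
    """
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)            # per-axis finite differences
    mag = np.hypot(gx, gy)               # gradient magnitude
    thresh = np.percentile(mag, percentile)
    rows, cols = np.nonzero(mag > thresh)
    return list(zip(rows.tolist(), cols.tolist()))

# Example: a white square on black yields points along its edges.
frame = np.zeros((32, 32))
frame[8:24, 8:24] = 1.0
points = extract_feature_points(frame)
```

In practice a library detector (e.g. an ORB or Harris detector) would replace this, but the interface, a list of coordinates per image, is all the later steps need.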
- Step 13: Based on the feature information of the video source data and the feature information of the captured image, generate a control command and send it to the motor, so that the motor drives the projection lens to move and achieve focusing.
- The projection device includes a projection lens and a motor connected to each other. After the feature information is obtained, a control command is generated from it and sent to the motor, so that the motor drives the projection lens to move, adjusting the focal length and position to achieve focus.
- the adjustment of the position of the projection lens is taken as an example for description, which specifically includes the following steps:
- Step 131: Determine the corresponding focus area based on the feature information of the video source data and the feature information of the captured image.
- The comparison is between the captured picture (i.e., the captured image) and the source video data, as rendered in the actual projected images (i.e., the projected display images).
- The video source data can first be matched with the captured image: a part of the captured image is compared against the video source data to find the portion of the video source data corresponding to it, establishing the correspondence between the two. For example, as shown in FIG. 4, the captured image is denoted A and a local area within it is denoted B. The pixel values in local area B are matched against the video source data, and the region of the video source data whose pixel values differ least from those of local area B is taken as the matching area.
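As a concrete illustration of this matching step, here is a brute-force sum-of-squared-differences search in NumPy. It is a sketch under the assumption of grayscale arrays; a real implementation would use a template-matching routine (e.g. OpenCV's `cv2.matchTemplate`) or FFT-based correlation for speed.

```python
import numpy as np

def find_matching_area(source, patch):
    """Locate `patch` inside `source` by exhaustive SSD search,
    returning the (row, col) of the top-left corner of the best match.
    """
    sh, sw = source.shape
    ph, pw = patch.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            # Sum of squared differences over the candidate window.
            ssd = np.sum((source[r:r+ph, c:c+pw] - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Example: cut a patch out of a random frame and recover its position.
rng = np.random.default_rng(0)
source = rng.random((24, 24))
patch = source[5:13, 9:17]
pos = find_matching_area(source, patch)
```

The returned corner, together with the patch size, defines the matching area used by the later steps.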
- Alternatively, the distortion of the captured image can be removed first, and corners or other feature points found; the correspondence between the positions of these points in the video source data and in the captured image then gives the matching area between the two.
- Ambient light subtraction may be performed after the captured image is de-distorted; when the ambient light is weak or uniform, this step can be skipped. The difference between two different frames can also be used to achieve ambient light subtraction: if the inter-frame difference is below a certain threshold, for example when a still picture is displayed, no subtraction is needed.
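A minimal sketch of the frame-differencing variant described above. The threshold value and the return convention (`None` when subtraction is skipped) are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def subtract_ambient(frame_a, frame_b, threshold=1.0):
    """Approximate ambient-light removal by frame differencing.

    If two consecutive captures barely differ (a still picture), the
    subtraction is skipped and None is returned. Otherwise the
    difference is returned; the static ambient term cancels out.
    """
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    if np.abs(diff).mean() < threshold:
        return None      # still frame: no subtraction needed
    return diff
```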
- Step 1312: Process the pixels of each matching area to obtain multiple focus feature points.
- After matching, in order to decide which part of the image to use for comparison, one or more specific regions are selected from the matching area. If the found area contains no sharp content, special algorithms with higher complexity and longer processing times would be required; therefore the sharpest region of the matching area, i.e., the high-spatial-frequency region, is located instead. The high-spatial-frequency region contains multiple focus feature points.
- A gradient operator, a Laplacian operator, or another edge-extraction operator can be applied to the matching area in the video source data and in the captured image to extract pixels that change drastically, yielding corresponding high-frequency pixel points. Positions where pixel values change sharply correspond to the high-frequency signal regions of the image (the high-spatial-frequency regions), such as edges; positions with little change correspond to low-frequency regions, such as large uniform color blocks. Because the high-frequency regions generally lie on edges or contours and thus best represent the image's structure, they are selected to judge the focus state.
- The pixel changes of each matching area can be computed and filtered by a set threshold, and the region rich in sharply changing pixels is regarded as the high-spatial-frequency region. That is, the value of each high-frequency pixel is compared with a preset value: if it is greater, the pixel is kept as a focus feature point; otherwise the pixel is discarded.
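The thresholded edge-extraction step can be sketched as follows, using the standard 3x3 Laplacian kernel. The convolution is written out directly to stay dependency-free; `preset_value` stands in for the patent's unspecified preset pixel value.

```python
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def focus_feature_points(img, preset_value):
    """Apply a 3x3 Laplacian and keep pixels whose absolute response
    exceeds `preset_value` (the 'focus feature points' of step 1312).
    """
    img = img.astype(np.float64)
    h, w = img.shape
    resp = np.zeros_like(img)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            resp[r, c] = np.sum(img[r-1:r+2, c-1:c+2] * LAPLACIAN)
    rows, cols = np.nonzero(np.abs(resp) > preset_value)
    return list(zip(rows.tolist(), cols.tolist()))
```

A flat region produces no points, while edges of a shape produce many, which matches the intent of keeping only the high-spatial-frequency pixels.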
- Step 1313: Connect adjacent focus feature points among the multiple focus feature points to obtain the corresponding focus areas.
- Adjacent focus feature points among those corresponding to the video source data are connected: the distance from each focus feature point to every other is computed, the two points with the shortest distances are taken as its neighbors, and straight lines drawn between neighbors form a closed region, the focus area of the video source data. The focus area of the captured image is obtained from its focus feature points in the same way, so an approximate focus area is found in both images.
- Multiple disjoint high-spatial-frequency regions can also be generated simultaneously in order to obtain a more accurate focusing result.
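The patent traces a closed polygon by linking each point to its two nearest neighbors; as a deliberately simplified stand-in, the sketch below just takes the axis-aligned bounding box of the focus feature points, which always contains that polygon. The tuple layout is an assumption.

```python
import numpy as np

def focus_region(points):
    """Reduce a set of (row, col) focus feature points to a single
    rectangular focus area (row_min, col_min, row_max, col_max).

    Simplification: bounding box instead of the nearest-neighbour
    polygon described in step 1313.
    """
    pts = np.asarray(points)
    r0, c0 = pts.min(axis=0)
    r1, c1 = pts.max(axis=0)
    return int(r0), int(c0), int(r1), int(c1)
```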
- Step 132: Obtain the sharpness of the focus area in the video source data (the first sharpness) and the sharpness of the focus area in the captured image (the second sharpness); based on the first and second sharpness, determine the adjustment direction of the projection lens.
- An image-sharpness evaluation function is used to compute the first and second sharpness, and their difference measures the focus state and defocus distance: the larger the difference, the larger the defocus distance. Concretely, for each high-spatial-frequency region a parameter indicating its high-frequency content is computed and used as the sharpness, for example the high-frequency part of the spatial spectrum, the sum of squared gradient values, or the sum of absolute Laplacian responses. To allow comparison between different images, the values can also be normalized by area, total brightness, or similar parameters; if there are multiple high-spatial-frequency regions, each can be normalized by its area and the results averaged to obtain the final value.
- To improve the stability of the system and obtain a stable defocus distance, the final value can be filtered, for example with a mean filter or by averaging after removing the maximum and minimum values.
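Two of the options named above, the sum-of-squared-gradients sharpness measure with area normalization, and the trimmed-mean filter for the defocus distance, can be sketched like this (function names are illustrative):

```python
import numpy as np

def sharpness(region):
    """Sum of squared gradient magnitudes over the focus region,
    normalised by area so regions of different size stay comparable.
    One of the evaluation functions suggested in the text.
    """
    region = region.astype(np.float64)
    gy, gx = np.gradient(region)
    return float(np.sum(gx**2 + gy**2) / region.size)

def filtered_defocus(history):
    """Stabilise the defocus distance by dropping the extreme values
    and averaging the rest (the 'remove max and min' filter above).
    """
    vals = sorted(history)
    core = vals[1:-1] if len(vals) > 2 else vals
    return sum(core) / len(core)
```

A region with edges scores higher than a flat one, so the difference `sharpness(source_region) - sharpness(captured_region)` grows with blur in the capture.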
- If the defocus distance increases after the last adjustment, the defocus direction is opposite to the last adjustment direction; if it decreases, the defocus direction is the same as the last adjustment direction. After the direction is determined, a single fine adjustment or multiple feedback adjustments are performed until the difference between the first and second sharpness is less than or equal to a preset defocus threshold. The preset step distance is chosen so that the defocus it introduces is smaller than this threshold, to prevent the adjustment from overshooting the optimal focus position. If the difference between the first and second sharpness is already less than or equal to the threshold, focusing is deemed successful and no adjustment is needed.
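The hill-climbing direction rule just described reduces to a few lines; `+1`/`-1` as motor directions and `0` for "in focus" are illustrative conventions, not from the patent.

```python
def decide_adjustment(prev_defocus, curr_defocus, last_direction, threshold):
    """Feedback rule of step 132: if defocus grew after the last move,
    reverse direction; if it shrank, keep going; stop once the defocus
    falls to or under the preset threshold.

    Returns +1 or -1 for the next motor step, or 0 for 'focused'.
    """
    if curr_defocus <= threshold:
        return 0                                  # focusing successful
    if curr_defocus < prev_defocus:
        return last_direction                     # same direction
    return -last_direction                        # overshoot: reverse
```

Iterating this rule while re-measuring the defocus distance implements the "multiple feedback fine-tuning" loop.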
- In summary, the matching area between the video source data and the captured image is computed from the extracted feature points, and processing the pixels of the matching area yields the focus feature points. Connecting adjacent focus feature points in each of the two sets gives approximate focus areas, and an image-sharpness evaluation function computes the sharpness of each. If focusing is inaccurate, feedback adjustment starts, and the system works out from the extracted focus areas how to complete focusing, achieving automatic focus.
- FIG. 5 is a schematic structural diagram of an embodiment of a real-time focusing device provided by the present application.
- The real-time focusing device 50 includes a memory 51 and a processor 52 connected to each other. The memory 51 stores a computer program which, when executed by the processor 52, implements the above real-time focusing method.
- This embodiment provides a reliable real-time focusing device 50: when the projection device is out of focus, the user does not need to detect or control focusing manually, because the device determines the out-of-focus condition automatically.
- FIG. 6 is a schematic structural diagram of an embodiment of a projection system provided by the present application.
- the projection system includes: a real-time focusing device 61, a projection device 62, and a camera device 63.
- the projection device 62 includes a projection lens 621 and a projection screen 622.
- the imaging device 63 may be a camera.
- The calibration parameters include the distortion parameters of the projection device 62 and of the camera device 63, the relative position and orientation of the two, and other information.
- the corresponding relationship between the projection pixels and the camera pixels can be roughly confirmed.
- However, if the distance to the projection screen 622 is undetermined, the correspondence between the projection area and the photographed area cannot be fixed even after the internal and external parameters are calibrated. How to determine this correspondence, i.e., the coordinates of each corner point on the camera sensor, is described below. Two adjacent frames of video source data can be down-sampled and differenced, the difference in each area computed, and the areas with a large inter-frame difference taken as characteristic areas. The actual display images corresponding to the two frames are differenced in the same way, and the position of each characteristic area of the video source data is detected in the actual display image (for example, using the cross-correlation function of a sliding window); this confirms the correspondence between projector pixels and camera pixels.
- the real-time focusing device 61 provided in this embodiment can be used for adjustment to realize focusing.
- The real-time focusing device 61 obtains the video source data. The projection device 62, connected to the real-time focusing device 61, receives the video source data, performs projection display, and forms the projected display image; the focusing position of the projection lens 621 is controlled by the real-time focusing device 61. The projection lens 621 projects the data in real time, forming the displayed image on the projection screen 622.
- The camera device 63 is connected to the real-time focusing device 61 and photographs the projection display image shown by the projection device 62 to obtain the captured image corresponding to the video source data. After receiving the synchronous shooting signal, it captures the projected display image generated by the projection device 62 and sends the captured image back to the real-time focusing device 61.
- The real-time focusing device 61 also processes the video source data and the captured image to obtain their feature information, generates a control instruction based on it, and sends the instruction to the motor (not shown in the figure), so that the motor drives the projection lens 621 to move and achieve focusing. Either the position of the projection lens 621 or its parameters can be adjusted; this embodiment takes adjusting the position of the lens 621 as an example.
- the projection system further includes a video source 64, and the video source 64 is used to send the video source data to the synchronization module 6111;
- the real-time focusing device 61 includes a processor 611 and a memory 612 that are connected to each other.
- the processor 611 is used to receive the video source data;
- The memory 612 receives the video source data sent by the processor 611 and stores it; the processor 611 can also retrieve the video source data from the memory 612 and receive the captured images sent by the camera device 63.
- The synchronization module 6111 is connected to the memory 612 and the video source 64; it receives the video source data and the captured image sent by the camera device 63, and stores the video source data in the memory 612.
- the feature extraction module 6112 is connected to the synchronization module 6111, and is used for processing the received video source data and the captured image to obtain multiple feature points in the video source data and multiple feature points in the captured image.
- the focus decision module 6113 is connected with the feature extraction module 6112, which is used to determine the adjustment direction of the projection lens 621 based on the multiple feature points of the video source data and the multiple feature points of the captured image, and generate a control instruction corresponding to the adjustment direction, and Control commands are sent to the motor.
- The synchronization module 6111 caches the images projected by the projection device 62; after receiving the picture captured by the camera device 63, it takes the corresponding video source data out of the memory 612 and sends both the captured image and the video source data to the feature extraction module 6112. The feature extraction module 6112 can use a specific image processing algorithm or a deep learning algorithm to convert the input images into feature information related to focus quality, and sends that information to the focus decision module 6113, which analyzes the focus state and controls the projection device 62 to adjust focus.
- In summary, the projection system includes a synchronization module 6111, a feature extraction module 6112, and a focus decision module 6113. The synchronization module 6111 buffers the video source data sent by the video source 64, controls the camera device 63 to capture the image at the correct time to generate the captured image, and passes the video source data and its corresponding captured image to the feature extraction module 6112. The feature extraction module 6112 extracts and matches feature information that does not depend on ambient light from the two synchronized images. The focus decision module 6113 computes the focus area from the input feature information, computes the sharpness of the focus area in the video source data and in the captured image, judges the adjustment direction of the projection lens 621, and outputs a control command to the motor; under this command the motor drives the projection lens 621 to move, achieving real-time, imperceptible focusing.
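The three-module pipeline can be sketched as a toy class. Everything here is an illustrative simplification: the sharpness metric stands in for the feature-extraction module, the frame-id cache for the synchronization module, and the returned defocus measure for the focus-decision output; none of these names come from the patent.

```python
import numpy as np

class RealTimeFocuser:
    """Toy pipeline mirroring the synchronization, feature-extraction,
    and focus-decision modules described above."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.buffer = {}           # synchronization: cached source frames

    def push_source(self, frame_id, frame):
        """Synchronization module: buffer a source frame for later pairing."""
        self.buffer[frame_id] = np.asarray(frame, dtype=np.float64)

    @staticmethod
    def _sharpness(img):
        """Feature extraction (simplified): mean squared gradient."""
        gy, gx = np.gradient(img)
        return float(np.mean(gx**2 + gy**2))

    def on_capture(self, frame_id, captured):
        """Focus decision: compare source and capture sharpness and
        return a defocus measure (0.0 means 'in focus enough')."""
        source = self.buffer.pop(frame_id)
        defocus = self._sharpness(source) - self._sharpness(
            np.asarray(captured, dtype=np.float64))
        return defocus if abs(defocus) > self.threshold else 0.0
```

A real system would feed the defocus measure into the motor-direction rule rather than returning it, but the data flow between the three modules is the same.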
- FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided by the present application.
- The computer-readable storage medium 80 stores a computer program 81 which, when executed by a processor, implements the real-time focusing method of the above embodiments. The computer-readable storage medium 80 can be any medium capable of storing program code, such as a server, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- the disclosed method and device may be implemented in other manners.
- the device implementations described above are only illustrative.
- The division into modules or units is only a logical functional division; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this implementation manner.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
Abstract
Disclosed are a real-time focusing method, apparatus and system, and a computer-readable storage medium. The method comprises: acquiring video source data and a captured image; processing the video source data and the captured image, so as to obtain feature information of the video source data and feature information of the captured image; and on the basis of the feature information of the video source data and the feature information of the captured image, generating a control instruction, and sending the control instruction to an electric motor, such that the electric motor drives a projection lens to move, thereby realizing focusing. In this way, automatic real-time focusing can be realized by means of the present application.
Description
The present application relates to the field of projection technology, and in particular to a real-time focusing method, apparatus, system, and computer-readable storage medium.
Autofocus is an important projector function that allows a projector to adapt to more usage scenarios. The specific requirements can be refined into two aspects: resetting the focal plane after initial installation or after the projector is moved; and compensating for defocus that arises during long-term operation, where heat generation causes temperature changes and the lens and the components of the optical path drift out of focus under those temperature changes (thermal defocus). In addition, under certain working conditions, vibration may cause misalignment and therefore blur. Such real-time blur cannot be corrected by current autofocus technology. Traditional focusing schemes must pause the displayed content and project a dedicated reference pattern to focus, so imperceptible focusing cannot be achieved. Alternatively, a traditional scheme may use a motion sensor to adjust the focus motor; however, because motion sensors have large error and noise, and the motion of the projector and its spatial relationship to the screen are complex, good results cannot be obtained, and the blur caused by thermal defocus cannot be corrected.
SUMMARY OF THE INVENTION
The present application provides a real-time focusing method, apparatus, system, and computer-readable storage medium capable of automatic real-time focusing.
To solve the above technical problem, the technical solution adopted by the present application is to provide a real-time focusing method, the method including: acquiring video source data and a captured image; processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; and, based on the feature information of the video source data and the feature information of the captured image, generating a control instruction and sending the control instruction to a motor, so that the motor drives a projection lens to move and focusing is achieved.
To solve the above technical problem, another technical solution adopted by the present application is to provide a real-time focusing apparatus, which includes a memory and a processor connected to each other, where the memory is used to store a computer program, and the computer program, when executed by the processor, is used to implement the above real-time focusing method.
To solve the above technical problem, another technical solution adopted by the present application is to provide a projection system, which includes: a real-time focusing apparatus, a projection apparatus, and a camera apparatus. The real-time focusing apparatus is used to receive video source data. The projection apparatus is connected to the real-time focusing apparatus and is used to receive the video source data sent by the real-time focusing apparatus and perform projection display, where the projection apparatus includes a projection lens and a motor connected to each other. The camera apparatus is connected to the real-time focusing apparatus and is used to photograph the projection display image displayed by the projection apparatus to obtain a captured image corresponding to the video source data. The real-time focusing apparatus is further used to process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image, and, based on the feature information of the video source data and the feature information of the captured image, to generate a control instruction and send it to the motor, so that the motor drives the projection lens to move and focusing is achieved.
To solve the above technical problem, another technical solution adopted by the present application is to provide a computer-readable storage medium, which is used to store a computer program, and the computer program, when executed by a processor, is used to implement the above real-time focusing method.
Through the above solutions, the beneficial effects of the present application are as follows: the received video source data is stored, and the projection apparatus is controlled to display the video source data so as to obtain a captured image corresponding to the video source data; the video source data and the captured image are then processed to obtain the corresponding feature information; using the feature information of the video source data and of the captured image, the adjustment direction of the projection lens can be determined; a corresponding control instruction is then generated according to the adjustment direction to control the motor to drive the projection lens to move. Focusing can thus be performed while the picture is playing, without manual adjustment, so real-time and imperceptible focusing can be achieved, and thermal defocus can be corrected.
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort. In the drawings:
FIG. 1 is a schematic flowchart of an embodiment of the real-time focusing method provided by the present application;
FIG. 2 is a schematic flowchart of another embodiment of the real-time focusing method provided by the present application;
FIG. 3 is a schematic flowchart of step 131 in the embodiment shown in FIG. 2;
FIG. 4 is a schematic diagram of the captured image in the embodiment shown in FIG. 2;
FIG. 5 is a schematic structural diagram of an embodiment of the real-time focusing apparatus provided by the present application;
FIG. 6 is a schematic structural diagram of an embodiment of the projection system provided by the present application;
FIG. 7 is a schematic diagram of the connection between the video source and the real-time focusing apparatus in the embodiment shown in FIG. 6;
FIG. 8 is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present application.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The present application provides a real-time focusing method, the method including:
acquiring video source data and a captured image;
processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; and
based on the feature information of the video source data and the feature information of the captured image, generating a control instruction and sending the control instruction to a motor, so that the motor drives a projection lens to move and focusing is achieved.
The method is described below with reference to specific embodiments.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the real-time focusing method provided by the present application. The method includes:
Step 11: Acquire video source data and a captured image.
The video source data may be the pixel values of an image. Different video source data may be received in different time periods; the video source data may be acquired online in real time, received from another apparatus, or read from a storage device. To facilitate subsequent comparison with the captured image, the video source data may be stored in the real-time focusing apparatus. It can be understood that the amount of video source data stored in the real-time focusing apparatus is relatively small and may be cleaned up periodically; the video source signal within a preset time before and after the current video source data may be acquired in real time, for example, the video source data within 5 seconds may be cached.
Further, the video source data sent by a video source may be received and forwarded to the projection apparatus, and the projection apparatus is controlled to display the video source data to generate a projection display image. After the projection apparatus starts displaying, a synchronized shooting signal may be sent to the camera apparatus to control the camera apparatus to photograph the projection display image on the projection screen, thereby obtaining a captured image corresponding to the video source data, the captured image being at least partially identical in content to the video source data. In addition, the real-time focusing apparatus retrieves the cached video source data of the same frame, so as to compare that video source data with the captured image.
In other embodiments, all received video source data may be cached, and one set of video source data is then retrieved from the stored video source data for projection display.
It can be understood that the acquisition of video source data may be performed for every frame, or once every several frames or several seconds.
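As an illustrative sketch of the caching described above (the class and parameter names are hypothetical, not from the application): recent source frames are kept in a small, periodically cleaned buffer keyed by frame index, so that when a captured image arrives, the source frame that was on screen at that moment can be retrieved for comparison.

```python
from collections import OrderedDict

class SourceFrameCache:
    """Keep only the most recent source frames (e.g. ~5 s worth),
    keyed by frame index, so the frame that was on screen when the
    camera fired can be looked up later."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._frames = OrderedDict()

    def push(self, frame_index, frame):
        self._frames[frame_index] = frame
        # Periodic cleanup: drop the oldest entries beyond capacity.
        while len(self._frames) > self.capacity:
            self._frames.popitem(last=False)

    def get(self, frame_index):
        # Returns None if the frame has already been evicted.
        return self._frames.get(frame_index)

# A capacity of 300 corresponds to about 5 seconds at 60 fps,
# matching the 5-second example above.
cache = SourceFrameCache(capacity=300)
for i in range(400):
    cache.push(i, f"frame-{i}")
```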
Step 12: Process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image.
Whether focusing is needed can be judged from the degree of blur of the captured image. However, if the judgment relies only on the blur of the captured image, it is hard to tell whether the displayed blur is caused by defocus, so the captured image is compared with the video source data. Specifically, only a partial region of the captured image may be compared with the corresponding part of the video source data (a local comparison), or the entire captured image may be compared with all of the video source data (a global comparison). If the video source data is sharp and the captured image is sharp, no focus adjustment is needed; if the video source data is sharp but the captured image is blurred, refocusing is needed. Further, after the captured image is acquired, image processing methods may be used to analyze the video source data and the captured image, and the feature information of each is extracted separately.
It can be understood that blur and sharpness are relative. If the difference between the sharpness of the video source data and that of the captured image is within a certain range, the two can be judged to be equally sharp (or equally blurred); otherwise, the captured image is judged to be blurred.
In a specific embodiment, the feature information includes feature points. A specific image processing algorithm or a deep learning algorithm may be used to process the video source data and the captured image to obtain a plurality of feature points in the video source data and a plurality of feature points in the captured image respectively; for example, a feature extraction algorithm may be used to extract the feature points in an image.
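As a minimal illustration of feature-point extraction (a stand-in for whatever detector an implementation actually uses, such as a corner detector or a learned model, none of which the application fixes), the sketch below simply takes the pixels with the strongest gradient magnitude as feature points:

```python
import numpy as np

def feature_points(img, k=4):
    """Return the (row, col) coordinates of the k pixels with the
    largest gradient magnitude -- a stand-in for a real feature
    extraction algorithm (corners, ORB, a learned detector, ...)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)         # gradients along rows, columns
    mag = np.hypot(gx, gy)
    # Indices of the k strongest responses, strongest first.
    flat = np.argsort(mag, axis=None)[::-1][:k]
    return [tuple(p) for p in np.column_stack(np.unravel_index(flat, mag.shape))]

# A dark image with one bright vertical edge at column 8: the
# detected points should cluster on that edge.
img = np.zeros((16, 16))
img[:, 8:] = 255.0
pts = feature_points(img, k=4)
```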
Step 13: Based on the feature information of the video source data and the feature information of the captured image, generate a control instruction and send the control instruction to the motor, so that the motor drives the projection lens to move and focusing is achieved.
The projection apparatus includes a projection lens and a motor connected to each other. After the feature information is generated, a control instruction can be generated from the feature information and sent to the motor, so that the motor drives the projection lens to move, adjusting the focal length and focus position and thereby achieving focusing.
In a specific embodiment, adjusting the position of the projection lens is taken as an example, which specifically includes the following steps:
Step 131: Determine a corresponding focus region based on the feature information of the video source data and the feature information of the captured image.
After the feature points of the video source data and of the captured image are extracted, these feature points can be used to determine the focus region, processed in the manner shown in FIG. 3, which specifically includes the following steps:
Step 1311: Match the plurality of feature points in the video source data with the plurality of feature points in the captured image to determine the corresponding matching region.
The comparison is between the captured image and the source video data. However, because of the position or viewing angle of the camera apparatus, when the camera apparatus photographs the picture displayed by the projection apparatus, the photographed picture (i.e., the captured image) may not be exactly the same as the actual projected picture (i.e., the projection display image). In that case the captured image cannot directly represent the actual projected picture, so to facilitate comparison, the video source data is first matched with the captured image. Further, a local part of the captured image may be compared against the video source data to find the video source data corresponding to at least part of the captured image, thereby establishing a correspondence between the video source data and the captured image. For example, as shown in FIG. 4, the captured image is denoted A, and a local region in the captured image A is denoted B; the pixel values in the local region B are matched against the video source data to find the pixel values in the video source data with the smallest difference from those in the local region B. The region composed of these pixel values is the region matching the local region B, i.e., the matching region.
Further, before matching, the distortion of the captured image may first be removed, and the corner points or other feature points of the image are then found; according to the positional correspondence of these corner points or feature points between the video source data and the captured image, the matching regions of the video source data and the captured image are found.
In a specific embodiment, to prevent ambient light from affecting the image processing, ambient light subtraction may be performed after the captured image is undistorted. When the ambient light is weak or uniform, ambient light subtraction may be omitted. Alternatively, the difference between two different frames may be used to achieve ambient light subtraction; if the difference between the two frames is smaller than a set threshold, ambient light subtraction may also be omitted, for example, when a still picture is displayed.
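The frame-difference form of ambient light subtraction described above can be sketched as follows (the function name and threshold value are hypothetical). Since the ambient contribution is approximately the same in both frames, differencing cancels it; when the two frames are nearly identical, as with a still picture, the subtraction is skipped:

```python
import numpy as np

def ambient_subtract(frame_a, frame_b, skip_threshold=1.0):
    """Difference two captured frames so the (static) ambient light
    contribution cancels.  If the frames are nearly identical
    (e.g. a still picture is displayed), return None so the caller
    skips the subtraction and keeps the original frame."""
    diff = frame_a.astype(float) - frame_b.astype(float)
    if np.abs(diff).mean() < skip_threshold:
        return None          # still picture: no subtraction performed
    return diff

ambient = 10.0                                  # uniform ambient level
content_a = np.zeros((4, 4)); content_a[0, 0] = 100.0
content_b = np.zeros((4, 4))
# The ambient term cancels, leaving only the content difference.
res = ambient_subtract(content_a + ambient, content_b + ambient)
still = ambient_subtract(content_b + ambient, content_b + ambient)
```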
Step 1312: Process the pixels of each matching region separately to obtain a plurality of focus feature points.
After matching, to further decide which local part of the image to use for comparison, one or more specific regions may be selected from the matching region. However, if the selected region contains no sufficiently sharp image content, special algorithms are needed, which places higher demands on the algorithm and takes longer to process. Therefore, the sharpest part of the matching region, i.e., a high-spatial-frequency region, is sought; the high-spatial-frequency region contains a plurality of focus feature points.
Further, a gradient operator, a Laplacian operator, or another edge extraction operator may be used to process the matching region in the video source data and the matching region in the captured image, for example, to extract the pixels in the matching region that change sharply, obtaining a corresponding plurality of high-frequency pixel points. Specifically, positions where the pixel values change sharply correspond to high-frequency signal regions in the image (i.e., high-spatial-frequency regions), such as edges; positions where the pixel values change little correspond to low-frequency signal regions, such as large uniform color blocks. Since high-frequency signal regions are generally the edges or contours of an image and better represent its contour information, high-frequency signal regions are chosen for judging the focus state.
The pixel variation of each matching region can then be counted and filtered with a set threshold, and regions rich in sharply changing pixels are taken as high-spatial-frequency regions; that is, it is judged whether the value of each high-frequency pixel point is greater than a preset pixel value. If the value of a high-frequency pixel point is greater than the preset pixel value, the high-frequency pixel point is taken as a focus feature point; if its value is less than or equal to the preset pixel value, the high-frequency pixel point is discarded.
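The Laplacian-and-threshold selection just described might look like the following sketch (the 3x3 kernel and the threshold value are illustrative choices, not fixed by the application):

```python
import numpy as np

# Standard 4-neighbour Laplacian kernel.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def focus_feature_points(region, threshold):
    """Apply a Laplacian to a matching region and keep the pixels
    whose absolute response exceeds the threshold -- these sharply
    changing pixels are the focus feature points."""
    region = region.astype(float)
    resp = np.zeros_like(region)
    h, w = region.shape
    # Convolve the interior (borders are left at zero response).
    for dy in range(-1, 2):
        for dx in range(-1, 2):
            k = LAPLACIAN[dy + 1, dx + 1]
            if k:
                resp[1:h-1, 1:w-1] += k * region[1+dy:h-1+dy, 1+dx:w-1+dx]
    ys, xs = np.nonzero(np.abs(resp) > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

region = np.zeros((8, 8))
region[:, 4:] = 200.0            # vertical edge between columns 3 and 4
pts = focus_feature_points(region, threshold=50.0)
```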
Step 1313: Connect adjacent focus feature points among the plurality of focus feature points to obtain the corresponding focus region.
After the focus feature points are acquired, adjacent focus feature points among the plurality of focus feature points corresponding to the video source data can be connected. Specifically, the distance between each focus feature point and the other focus feature points can be calculated, the two focus feature points with the shortest distance are taken as adjacent focus feature points, and adjacent focus feature points are then connected with straight lines to obtain a closed region, i.e., the focus region of the video source data. The plurality of focus feature points corresponding to the captured image are processed in the same way to obtain the focus region of the captured image, so that approximate focus regions are found in the two images.
It can be understood that a plurality of disjoint high-spatial-frequency regions can be generated at the same time in order to obtain a more accurate focusing result.
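One simple way to close a scatter of focus feature points into a region, in the spirit of joining neighbouring points with straight lines, is a convex hull; the sketch below uses Andrew's monotone chain. This is an illustrative construction, not necessarily the exact procedure the application intends:

```python
def closed_region(points):
    """Connect focus feature points into a closed polygon.  A convex
    hull (Andrew's monotone chain) is used here as one simple way to
    obtain a closed region enclosing the points; the text only
    requires joining neighbouring points into a closed shape."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate, dropping the duplicated endpoints: a closed polygon.
    return lower[:-1] + upper[:-1]

# Interior points (2, 2) and (1, 3) are absorbed into the region.
hull = closed_region([(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)])
```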
Step 132: Acquire the sharpness of the focus region in the video source data, denoted the first sharpness, and the sharpness of the focus region in the captured image, denoted the second sharpness; based on the first sharpness and the second sharpness, determine the adjustment direction of the projection lens.
After the focus regions are acquired, an image sharpness evaluation function can be used to calculate the first sharpness and the second sharpness, and the difference between them is then calculated. The sharpness difference can be used to measure the focus state and the defocus distance: the larger the difference, the larger the defocus distance. Specifically, for each high-spatial-frequency region, a parameter indicating its high-frequency information can be calculated and used as the sharpness, for example, the high-frequency part of the spatial spectrum, the sum of squared gradient values, or the sum of the absolute values of the Laplacian. In addition, to ensure comparability between different images, the difference between the two may be normalized with respect to parameters such as area or total brightness; if there are multiple high-spatial-frequency regions, each can be normalized by its area and the results averaged to obtain the final value.
In a specific embodiment, to improve the stability of the system and obtain a stable defocus distance, the final defocus distance may be filtered, for example, by mean filtering, or by mean filtering after discarding the maximum and minimum values.
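A sketch of one of the sharpness measures named above (the sum of squared gradients, normalised by region area) together with the max/min-discarding mean filter for the defocus estimate; the function names are illustrative:

```python
import numpy as np

def region_sharpness(region):
    """Sum of squared gradients, normalised by region area -- one of
    the sharpness evaluation functions named in the text."""
    region = region.astype(float)
    gy, gx = np.gradient(region)
    return float((gx**2 + gy**2).sum() / region.size)

def filtered_defocus(history):
    """Stabilise the defocus estimate: drop the max and the min,
    then average the rest (plain mean for very short histories)."""
    vals = sorted(history)
    if len(vals) > 2:
        vals = vals[1:-1]
    return sum(vals) / len(vals)

# A hard edge should score sharper than a smooth ramp of the same range.
sharp = np.zeros((8, 8)); sharp[:, 4:] = 100.0
blurred = np.repeat(np.linspace(0.0, 100.0, 8)[None, :], 8, axis=0)
d1, d2 = region_sharpness(sharp), region_sharpness(blurred)
```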
It can then be judged whether the difference between the first sharpness and the second sharpness is greater than a preset defocus threshold. If it is, feedback adjustment is performed: the projection lens is controlled to move a preset distance in an arbitrary direction, and the defocus state continues to be judged, i.e., the process returns to step 11. If the defocus distance increases after the previous adjustment, the defocus direction is opposite to the previous adjustment direction; if the defocus distance decreases after the previous adjustment, the defocus direction is the same as the previous adjustment direction. After the defocus direction is determined, a single fine adjustment or multiple feedback fine adjustments can be performed until the difference between the first sharpness and the second sharpness is less than or equal to the preset defocus threshold. The preset distance is smaller than the preset defocus threshold, that is, the defocus threshold is larger than the defocus caused by moving the preset distance, to prevent an adjustment of the preset distance from overshooting the optimal focus position. If the difference between the first sharpness and the second sharpness is less than or equal to the preset defocus threshold, focusing is determined to be successful and no adjustment is needed.
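The feedback adjustment described above is essentially a hill-climbing loop: move the lens by a preset step, keep the direction while the measured defocus shrinks, reverse it when the defocus grows, and stop once the defocus is within the threshold. A sketch, where the `defocus_at` callback stands in for a real sharpness-difference measurement (all names are hypothetical):

```python
def focus_by_feedback(defocus_at, start, step, threshold, max_iters=50):
    """Hill-climbing feedback loop over lens position.  `defocus_at(pos)`
    returns the measured sharpness difference at a lens position."""
    pos, direction = start, +1
    prev = defocus_at(pos)
    for _ in range(max_iters):
        if prev <= threshold:
            break                      # in focus: no adjustment needed
        pos += direction * step
        cur = defocus_at(pos)
        if cur > prev:                 # got worse: we moved the wrong way
            direction = -direction
        prev = cur
    return pos

# Toy optics: defocus grows linearly with distance from position 3.0.
best = focus_by_feedback(lambda p: abs(p - 3.0), start=0.0,
                         step=0.5, threshold=0.3)
# Starting on the far side forces a direction reversal first.
best2 = focus_by_feedback(lambda p: abs(p - 3.0), start=5.0,
                          step=0.5, threshold=0.3)
```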
In this embodiment, the matching regions of the video source data and the captured image can first be computed from the obtained feature points, and the corresponding focus feature points can be obtained by processing the pixels of the matching regions. Based on the distribution of the two sets of focus feature points, approximate focus regions can be obtained by connecting adjacent focus feature points. With the obtained focus regions, an image sharpness evaluation function can be used to compute the sharpness of each focus region, and whether the current focus is accurate can be judged from the difference between the two sharpness values; if the focus is inaccurate, feedback adjustment starts. How to complete focusing can thus be analyzed from the extracted focus regions, achieving autofocus.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an embodiment of the real-time focusing apparatus provided by the present application. The real-time focusing apparatus 50 includes a memory 51 and a processor 52 connected to each other. The memory 51 is used to store a computer program, and the computer program, when executed by the processor 52, is used to implement the above real-time focusing method.
This embodiment provides a reliable real-time focusing apparatus 50. When the projection apparatus goes out of focus, the user does not need to judge and control focusing manually; the out-of-focus state is judged automatically, and real-time autofocus can be performed while any video image is displayed, without affecting the user experience. Defocus caused by thermal defocus, vibration, and similar causes can also be corrected in real time.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an embodiment of the projection system provided by the present application. The projection system includes a real-time focusing apparatus 61, a projection apparatus 62, and a camera apparatus 63. The projection apparatus 62 includes a projection lens 621 and a projection screen 622; the camera apparatus 63 may be a camera.
Before the real-time focusing process starts, the intrinsic and extrinsic parameters of the projection lens 621 and the camera apparatus 63 may be calibrated, and the calibration parameters are saved in the real-time focusing apparatus 61. The calibration parameters include information such as the distortion parameters of the projection apparatus 62, the distortion parameters of the camera apparatus 63, and the relative position and relative orientation of the projection apparatus 62 and the camera apparatus 63.
After calibration, the correspondence between projector pixels and camera pixels can be roughly confirmed. However, since the distance to the projection screen 622 is not determined, the correspondence between the projection area and the photographed area, i.e., the coordinates of the four corner points of the projection area on the camera sensor, still cannot be determined even after the intrinsic and extrinsic parameters are calibrated. How this correspondence is determined is described below.
For fixed installations or relatively fixed usage scenarios, after installation the projection area can first be obtained by subtracting the black-field and white-field pictures, and the coordinates of the four corner points are acquired using a traditional corner detection or line detection algorithm (such as the Hough transform) together with a clustering algorithm (such as K-means clustering). After the intrinsic and extrinsic parameters of the projection lens 621 and the camera and the four corner coordinates are obtained, a four-point transformation can be used after undistortion to obtain the correspondence between projector pixel coordinates and camera pixel coordinates. It is also possible to subtract two complementary checkerboard pictures with a specific number of rows and columns, and then use corner detection, line detection, or clustering algorithms to obtain a more accurate alignment relationship.
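The four-point transformation mentioned above can be sketched as a direct linear transform that solves the 3x3 homography mapping the four (undistorted) projector corner points to the four detected camera corner points; the corner coordinates below are made-up examples:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve the 3x3 homography H with dst ~ H @ src for four point
    correspondences, via the standard direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u*x, -u*y, -u])
        A.append([0, 0, 0, x, y, 1, -v*x, -v*y, -v])
    # The solution is the null-space vector of A (last row of V^T).
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Projector corners -> where they land on the camera sensor (examples).
proj = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
cam  = [(100, 80), (1700, 120), (1650, 1000), (60, 950)]
H = homography_from_corners(proj, cam)
u00, v00 = apply_h(H, proj[0])
u22, v22 = apply_h(H, proj[2])
```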
For more complex usage scenarios, for example when the relative position of the projection lens 621 and the projection screen 622 may change at any time, in order to achieve imperceptible focusing, two adjacent frames of video source data can be down-sampled and differenced, the magnitude of the difference in each region is computed, and the regions with a large inter-frame difference are taken as feature regions. At the same time, the actual displayed pictures corresponding to the two frames of video source data are differenced, and the position of each feature region of the video source data in the actual displayed picture is detected (for example, using a sliding-window cross-correlation), so that the correspondence between projector pixels and camera pixels can be confirmed.
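The down-sample-and-difference step can be sketched as follows: both frames are average-pooled into tiles, the tiles are differenced, and the tile with the largest change is taken as the feature region for the subsequent alignment search (block size and frame contents are illustrative):

```python
import numpy as np

def block_downsample(frame, block=4):
    """Average-pool the frame over block x block tiles."""
    h, w = frame.shape
    return frame[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def most_changed_region(frame_a, frame_b, block=4):
    """Down-sample two adjacent source frames, difference them, and
    return the tile index where they differ most -- that tile is the
    feature region used for the alignment search."""
    d = np.abs(block_downsample(frame_a.astype(float), block)
               - block_downsample(frame_b.astype(float), block))
    return np.unravel_index(np.argmax(d), d.shape)

a = np.zeros((16, 16))
b = np.zeros((16, 16))
b[8:12, 12:16] = 255.0          # the only moving content, in tile (2, 3)
region = most_changed_region(a, b)
```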
Once the correspondence between projector pixels and camera pixels is determined, the distance between the projection lens 621 and the projection screen 622 is known, and the focus-position parameters can be roughly determined from the design parameters of the projection lens 621. However, precise focusing parameters cannot be determined this way because of thermal defocus and other causes, so the real-time focusing apparatus 61 provided in this embodiment can be used to make the adjustment and achieve focus.
The real-time focusing apparatus 61 is used to obtain video source data. The projection apparatus 62 is connected to the real-time focusing apparatus 61; it receives the video source data sent by the real-time focusing apparatus 61 and performs projection display, forming a projection display image. Specifically, the focus position of the projection lens 621 is controlled by the real-time focusing apparatus 61; after receiving the video source data sent by the real-time focusing apparatus 61, the projection lens 621 projects it in real time, forming the displayed image on the projection screen 622.
The camera apparatus 63 is connected to the real-time focusing apparatus 61; it photographs the projection display image shown by the projection apparatus 62 to obtain a captured image corresponding to the video source data. Specifically, upon receiving the synchronized shooting signal sent by the real-time focusing apparatus 61, the camera apparatus 63 photographs the projection display image generated by the projection apparatus 62, obtains the captured image, and sends it back to the real-time focusing apparatus 61.
The real-time focusing apparatus 61 is further used to process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; based on these two sets of feature information, it generates a control instruction and sends it to a motor (not shown in the figure), so that the motor drives the projection lens 621 to move and achieve focus. Further, either the position of the projection lens 621 or the parameters of the projection lens 621 may be adjusted; this embodiment takes adjusting the position of the projection lens 621 as the example.
In a specific embodiment, as shown in FIG. 6, the projection system further includes a video source 64 used to send video source data to a synchronization module 6111. The real-time focusing apparatus 61 includes a processor 611 and a memory 612 connected to each other: the processor 611 receives the video source data; the memory 612 receives the video source data sent by the processor 611 and stores it; and the processor 611 can also retrieve the video source data from the memory 612 and receive the captured image sent by the camera apparatus 63.
The synchronization module 6111 is connected to the memory 612 and the video source 64; it receives the video source data and the captured image sent by the camera apparatus 63, and stores the video source data in the memory 612.
The feature extraction module 6112 is connected to the synchronization module 6111; it processes the received video source data and captured image to obtain a plurality of feature points in the video source data and a plurality of feature points in the captured image.
The focus decision module 6113 is connected to the feature extraction module 6112; based on the feature points of the video source data and those of the captured image, it determines the adjustment direction of the projection lens 621, generates a control instruction corresponding to that direction, and sends the instruction to the motor.
Further, while controlling the projection apparatus 62 to display the video source data, the synchronization module 6111 can buffer the image being projected; after receiving the picture captured by the camera apparatus 63, it retrieves the corresponding video source data from the memory 612 and sends the captured image together with the video source data to the feature extraction module 6112. The feature extraction module 6112 can use a particular image-processing or deep-learning algorithm to convert the input images into feature information strongly correlated with focus, and sends this feature information to the focus decision module 6113, which analyzes the focus state and controls the projection apparatus 62 to perform focus adjustment.
The focus decision module 6113 analyzes how to complete focusing from the extracted focus area. Specifically, the focus areas of the video source data and of the captured image are first computed from the obtained feature points; since the two sets of feature points have already been matched in the feature extraction module 6112, adjacent feature points can be connected according to the distribution of the two sets, yielding an approximate focus area in each of the two images. An image-sharpness evaluation function is then used to compute the sharpness of each focus area, and the sharpness gap indicates whether the current focus is accurate. If it is not, feedback adjustment begins: the projection lens 621 is moved a preset distance in either direction and the defocus after the move is assessed. If the defocus distance increased after the previous adjustment, the defocus direction is opposite to the previous adjustment direction; otherwise it is the same. Once the defocus direction is known, adjustment continues until the defocus distance is within an acceptable range.
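The sharpness check and the direction-flipping feedback rule above can be sketched as follows. The variance-of-Laplacian score and the function names are illustrative assumptions; the patent does not fix a particular sharpness evaluation function.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Variance-of-Laplacian sharpness score: a defocused image has less
    high-frequency energy, so the score drops as defocus grows."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def focus_step(defocus_before: float, defocus_after: float, last_dir: int) -> int:
    """One step of the feedback rule in the text: if defocus grew after the
    last move, reverse direction; otherwise keep moving the same way."""
    return -last_dir if defocus_after > defocus_before else last_dir
```

The loop would compare `sharpness` of the source focus area against that of the captured focus area, call `focus_step` after each lens move, and stop once the gap falls inside the acceptable range.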
This embodiment proposes a projection system capable of real-time focusing, comprising a synchronization module 6111, a feature extraction module 6112, and a focus decision module 6113. The synchronization module 6111 buffers the video source data sent by the video source 64, controls the camera apparatus 63 to capture an image at the correct moment, and passes the video source data and its corresponding captured image to the feature extraction module 6112. The feature extraction module 6112 extracts and matches, from the two synchronized images, feature information that does not depend on ambient light. The focus decision module 6113 computes the focus area from the input feature information, computes the sharpness of the focus area in the video source data and the sharpness of the focus area in the captured image, and from these determines the adjustment direction of the projection lens 621, outputting a control instruction to the motor. Under this instruction the motor drives the projection lens 621 to move, achieving real-time, imperceptible focusing. This resolves the thermal defocus and other defocus conditions that occur while the projection apparatus 62 is playing content and gives the user a better viewing experience.
Referring to FIG. 8, which is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present application, the computer-readable storage medium 80 stores a computer program 81 which, when executed by a processor, implements the real-time focusing method of the above embodiments.
The computer-readable storage medium 80 may be any medium capable of storing program code, such as a server, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In the several embodiments provided in this application, it should be understood that the disclosed method and device may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above are merely embodiments of the present application and do not limit its patent scope. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of this application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of this application.
Claims (12)
- A real-time focusing method, comprising:
acquiring video source data and a captured image;
processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; and
generating a control instruction based on the feature information of the video source data and the feature information of the captured image, and sending the control instruction to a motor, so that the motor drives a projection lens to move to achieve focusing.
- The real-time focusing method according to claim 1, wherein the step of generating a control instruction based on the feature information of the video source data and the feature information of the captured image and sending the control instruction to a motor so that the motor drives the projection lens to move comprises:
determining a corresponding focus area based on the feature information of the video source data and the feature information of the captured image;
obtaining the sharpness of the focus area in the video source data, denoted as a first sharpness;
obtaining the sharpness of the focus area in the captured image, denoted as a second sharpness; and
determining an adjustment direction of the projection lens based on the first sharpness and the second sharpness.
- The real-time focusing method according to claim 2, wherein the method further comprises:
calculating the first sharpness and the second sharpness using an image-sharpness evaluation function;
calculating the difference between the first sharpness and the second sharpness;
determining whether the difference between the first sharpness and the second sharpness is greater than a preset defocus threshold;
if so, controlling the projection lens to move a preset distance in either direction and returning to the step of acquiring video source data and a captured image, until the difference between the first sharpness and the second sharpness is less than or equal to the preset defocus threshold;
if not, determining that focusing has succeeded;
wherein the preset distance is smaller than the preset defocus threshold.
- The real-time focusing method according to claim 2, wherein the feature information comprises a plurality of feature points, and the step of determining a corresponding focus area based on the feature information of the video source data and the feature information of the captured image comprises:
matching the plurality of feature points in the video source data with the plurality of feature points in the captured image to determine corresponding matching areas;
processing the pixels of each matching area to obtain a plurality of focus feature points; and
connecting adjacent focus feature points among the plurality of focus feature points to obtain the corresponding focus area.
- The real-time focusing method according to claim 4, wherein the step of processing the pixels of each matching area to obtain a plurality of focus feature points comprises:
processing the matching area in the video source data and the matching area in the captured image with a gradient operator or a Laplacian operator to obtain a plurality of high-frequency pixel points;
determining whether the pixel value of each high-frequency pixel point is greater than a preset pixel value; and
if so, taking the high-frequency pixel point as a focus feature point.
- The real-time focusing method according to claim 1, wherein the step of acquiring video source data and a captured image comprises:
sending the video source data to a projection apparatus for projection display; and
sending a synchronized shooting signal to a camera apparatus to control the camera apparatus to photograph the projection display image on a projection screen, obtaining a captured image corresponding to the video source data.
- A real-time focusing apparatus, comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program which, when executed by the processor, implements the real-time focusing method according to any one of claims 1-6.
- A projection system, comprising:
a real-time focusing apparatus configured to receive video source data;
a projection apparatus, connected to the real-time focusing apparatus, configured to receive the video source data sent by the real-time focusing apparatus and perform projection display, wherein the projection apparatus comprises a projection lens and a motor connected to each other; and
a camera apparatus, connected to the real-time focusing apparatus, configured to photograph the projection display image shown by the projection apparatus to obtain a captured image corresponding to the video source data;
wherein the real-time focusing apparatus is further configured to process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image, generate a control instruction based on the feature information of the video source data and the feature information of the captured image, and send the control instruction to the motor, so that the motor drives the projection lens to move to achieve focusing.
- The projection system according to claim 8, wherein the real-time focusing apparatus comprises:
a processor configured to receive the video source data; and
a memory, connected to the processor, configured to receive the video source data sent by the processor and store the video source data;
wherein the processor is further configured to obtain the video source data from the memory and to receive the captured image sent by the camera apparatus.
- The projection system according to claim 9, wherein the feature information comprises a plurality of feature points, the projection apparatus further comprises a projection screen, and the processor comprises:
a synchronization module, connected to the memory, configured to receive the video source data and the captured image and to store the video source data in the memory;
a feature extraction module, connected to the synchronization module, configured to process the received video source data and captured image to obtain a plurality of feature points in the video source data and a plurality of feature points in the captured image; and
a focus decision module, connected to the feature extraction module, configured to determine an adjustment direction of the projection lens based on the plurality of feature points of the video source data and the plurality of feature points of the captured image, generate a control instruction corresponding to the adjustment direction, and send the control instruction to the motor.
- The projection system according to claim 10, further comprising a video source connected to the synchronization module and configured to send projection data to the synchronization module, wherein the projection data comprises at least one frame of video source data.
- A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the real-time focusing method according to any one of claims 1-6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010980932.2 | 2020-09-17 | ||
CN202010980932.2A CN114286064A (en) | 2020-09-17 | 2020-09-17 | Real-time focusing method, device, system and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022057670A1 true WO2022057670A1 (en) | 2022-03-24 |
Family
ID=80776392
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/116773 WO2022057670A1 (en) | 2020-09-17 | 2021-09-06 | Real-time focusing method, apparatus and system, and computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114286064A (en) |
WO (1) | WO2022057670A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114666558B (en) * | 2022-04-13 | 2023-07-25 | 深圳市火乐科技发展有限公司 | Method and device for detecting definition of projection picture, storage medium and projection equipment |
CN114760415B (en) * | 2022-04-18 | 2024-02-02 | 上海千映智能科技有限公司 | Lens focusing method, system, equipment and medium |
CN116095477B (en) * | 2022-08-16 | 2023-10-20 | 荣耀终端有限公司 | Focus processing system, method, equipment and storage medium |
CN115361541B (en) * | 2022-10-20 | 2023-01-24 | 潍坊歌尔电子有限公司 | Method and device for recording projection content of projector, projector and storage medium |
CN118474529A (en) * | 2023-09-27 | 2024-08-09 | 荣耀终端有限公司 | Focusing method of camera, electronic equipment and computer readable storage medium |
CN117319618B (en) * | 2023-11-28 | 2024-03-19 | 维亮(深圳)科技有限公司 | Projector thermal focus out judging method and system for definition evaluation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080297668A1 (en) * | 2007-05-29 | 2008-12-04 | Konica Minolta Opto, Inc. | Video projection device |
CN101840055A (en) * | 2010-05-28 | 2010-09-22 | 浙江工业大学 | Video Auto Focus System Based on Embedded Media Processor |
CN107942601A (en) * | 2017-12-25 | 2018-04-20 | 天津天地伟业电子工业制造有限公司 | A kind of stepper motor lens focus method based on temperature-compensating |
CN111050150A (en) * | 2019-12-24 | 2020-04-21 | 成都极米科技股份有限公司 | Focal length adjusting method and device, projection equipment and storage medium |
CN113132620A (en) * | 2019-12-31 | 2021-07-16 | 华为技术有限公司 | Image shooting method and related device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117191805A (en) * | 2023-10-26 | 2023-12-08 | 中导光电设备股份有限公司 | Automatic focusing method and system for AOI (automatic optical inspection) detection head |
CN117191805B (en) * | 2023-10-26 | 2024-04-26 | 中导光电设备股份有限公司 | Automatic focusing method and system for AOI (automatic optical inspection) detection head |
Also Published As
Publication number | Publication date |
---|---|
CN114286064A (en) | 2022-04-05 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21868495; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21868495; Country of ref document: EP; Kind code of ref document: A1 |