
WO2022022319A1 - Image processing method, electronic device, image processing system and chip system - Google Patents

Image processing method, electronic device, image processing system and chip system

Info

Publication number: WO2022022319A1
Authority: WO (WIPO, PCT)
Prior art keywords: feature, feature information, information, network model, identification information
Application number: PCT/CN2021/107246
Other languages: English (en), French (fr)
Inventors: 徐彦卿, 王四海
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to US application 18/007,143, published as US20230230343A1
Priority to EP application 21850187.2A, published as EP4181016A4
Publication of WO2022022319A1

Classifications

    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06F 18/24: Classification techniques
    • G06F 18/285: Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 10/94: Hardware or software architectures specially adapted for image or video understanding

Definitions

  • the embodiments of the present application relate to the field of image processing, and in particular, to an image processing method, an electronic device, an image processing system, and a chip system.
  • image processing based on deep neural network models has also developed rapidly.
  • the features of the image can be extracted through a deep neural network model, and then the features of the image can be analyzed to complete the image processing.
  • the image processing can include: object detection, semantic segmentation, panoptic segmentation, and image classification.
  • the deep neural network model based on deep learning can be divided into two parts from the functional point of view: the feature extraction network model and the feature analysis network model.
  • the feature extraction network model is used to extract the features of the image; the feature analysis network model is used to analyze and process the features of the image to complete the corresponding image processing tasks.
  • multiple different feature analysis network models can share the same feature extraction network model.
  • However, the image features required by different feature analysis network models may differ, and the image features extracted by a single shared feature extraction network model cannot satisfy every feature analysis network model at once, resulting in poor image processing results for some of the feature analysis network models.
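  • As a rough illustration of the backbone/head split described above (not part of the patent text; the class and function names below are hypothetical placeholders), the following Python sketch shows one feature extraction "backbone" whose output is shared by several task-specific feature analysis "heads":

```python
import numpy as np

class FeatureExtractor:
    """Stand-in for a feature extraction network model (the shared backbone)."""
    def extract(self, image: np.ndarray) -> np.ndarray:
        # A real model would run convolutional layers; this stub just
        # flattens the image and keeps a fixed-length feature vector.
        return image.astype(np.float32).reshape(-1)[:128]

class ClassificationHead:
    """Stand-in for a feature analysis network model for image classification."""
    def analyze(self, features: np.ndarray) -> int:
        return int(features.sum()) % 10          # dummy class label

class DetectionHead:
    """Stand-in for a feature analysis network model for target detection."""
    def analyze(self, features: np.ndarray) -> list:
        return [(0, 0, 16, 16)]                  # dummy bounding box

backbone = FeatureExtractor()
features = backbone.extract(np.zeros((32, 32, 3)))   # both heads reuse these features
print(ClassificationHead().analyze(features))
print(DetectionHead().analyze(features))
```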
  • the embodiments of the present application provide an image processing method, an electronic device, an image processing system and a chip system, which solve the problem of poor image processing effect of a feature analysis network model in a multi-task deep neural network model.
  • a first aspect provides an image processing method, including: a first device extracts feature information of an image to be processed through at least one pre-stored feature extraction network model; the first device identifies the extracted feature information to obtain identification information of the feature information; the first device sends the feature information of the image to be processed and the identification information of the feature information to the second device to instruct the second device to select a feature analysis network model corresponding to the identification information to process the feature information.
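  • A minimal sketch, under assumed names, of the three steps performed by the first device in this aspect (extract, identify, send); `first_device_process` and `send_to_second_device` are illustrative placeholders, not functions defined by the patent:

```python
def first_device_process(image, extraction_models, send_to_second_device):
    """Run each pre-stored feature extraction network model on the image,
    tag its output with identification information, and send both out.

    `extraction_models` maps a model identifier to a callable model;
    here the model identifier doubles as the identification information.
    """
    for model_id, model in extraction_models.items():
        feature_info = model(image)            # extract feature information
        identification_info = model_id         # identify the feature information
        send_to_second_device(feature_info, identification_info)

# Example wiring with trivial placeholders:
first_device_process(
    image=[[1, 2], [3, 4]],
    extraction_models={"net_0": lambda img: sum(sum(row) for row in img)},
    send_to_second_device=lambda f, i: print(f"features={f}, id={i}"),
)
```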
  • the first device may use the identification information to instruct the second device, which stores multiple feature analysis network models, to select the feature analysis network model corresponding to the identification information to process the received feature information.
  • the feature analysis network models in the second device may correspond to one feature extraction network model in the first device, or may correspond to multiple feature extraction network models in the first device.
  • the multiple feature analysis network models in the second device can determine, according to the identification information, into which feature analysis network model to input the feature information to complete the corresponding image processing task. This avoids the problem of poor image processing results caused by the feature information of a single feature extraction network model being unable to meet the requirements of multiple feature analysis network models at the same time.
  • the first device identifies the extracted feature information, and obtaining the identification information of the feature information includes: the first device obtains the identification of the feature extraction network model from which the feature information is extracted; the first device uses the identification of the feature extraction network model that extracts the feature information as the identification information of the feature information.
  • the identification of the feature extraction network model that extracts the feature information may be used as the identification information of the feature information.
  • the second device may determine a feature extraction network model for extracting the feature information according to the identification information of the feature information, so as to select an appropriate feature analysis network model to analyze the received feature information to complete the corresponding image processing task.
  • the first device identifies the extracted feature information, and obtaining the identification information of the feature information includes: the first device obtains an identification of an output level of the feature information, wherein the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, at which the feature information is output; the first device uses the identification of the output level of the feature information as the identification information of the feature information.
  • the identification of the output level of the feature information may be used as the identification information of the feature information.
  • the second device may determine an output level for outputting the feature information according to the identification information of the feature information, so as to select an appropriate feature analysis network model to analyze the received feature information to complete the corresponding image processing task.
  • the first device identifies the extracted feature information, and obtaining the identification information of the feature information includes: the first device obtains the identification of the feature extraction network model from which the feature information is extracted; the first device obtains the identification of the output level of the feature information, wherein the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, at which the feature information is output; the first device uses the identification of the feature extraction network model that extracts the feature information and the identification of the output level of the feature information as the identification information of the feature information.
  • the identification of the feature extraction network model for extracting the feature information and the identification of the output level of the feature information may be used as the identification information of the feature information.
  • the second device can determine the feature extraction network model and output level of the feature information according to the identification information of the feature information, so as to select an appropriate feature analysis network model to analyze the received feature information to complete the corresponding image processing task.
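  • The three identification schemes above can be summarized in a small helper; the function below is only an illustrative sketch (the names and the combined-identifier format are assumptions, not defined by the patent):

```python
def make_identification_info(model_id, level_id=None, mode="model"):
    """Build identification information for a piece of feature information.

    mode == "model": use only the identifier of the feature extraction model
    mode == "level": use only the identifier of the output level
    mode == "both":  combine the two identifiers
    """
    if mode == "model":
        return model_id
    if mode == "level":
        return level_id
    return f"{model_id}/{level_id}"

print(make_identification_info("net1"))                     # 'net1'
print(make_identification_info("net1", "layer3", "level"))  # 'layer3'
print(make_identification_info("net1", "layer3", "both"))   # 'net1/layer3'
```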
  • any one of the methods for generating identification information listed above may be selected according to actual requirements, thereby improving the flexibility of the application of the embodiments of the present application.
  • a second aspect provides an image processing method, comprising: a second device acquires feature information of an image to be processed and identification information of the feature information sent by a first device connected to the second device; the second device determines, according to the identification information of the feature information, a feature analysis network model for processing the feature information; the second device inputs the feature information of the image to be processed into the determined feature analysis network model to obtain an image processing result.
  • the second device determining the feature analysis network model for processing the feature information according to the identification information of the feature information includes: the second device acquiring the correspondence between the identification information and the feature analysis network model; According to the corresponding relationship, the second device uses the feature analysis network model corresponding to the identification information of the feature information as the feature analysis network model for processing the feature information.
  • the identification information of the feature information includes: an identification of the feature extraction network model that extracts the feature information; and/or an identification of the output level of the feature information, wherein the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, at which the feature information is output.
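  • On the second-device side, the selection step reduces to a lookup in the stored correspondence; a minimal sketch follows (the dictionary-based correspondence and the placeholder models are assumptions for illustration):

```python
def second_device_process(feature_info, identification_info, correspondence):
    """Pick the feature analysis network model bound to the identification
    information and run it on the received feature information."""
    analysis_model = correspondence[identification_info]
    return analysis_model(feature_info)

# Example: identification information "00" is bound to a classification model,
# "01" to a detection model (both are trivial stand-ins here).
correspondence = {"00": lambda f: f"class {len(f) % 10}",
                  "01": lambda f: [(0, 0, 8, 8)]}
print(second_device_process([0.1, 0.2, 0.3], "00", correspondence))  # class 3
```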
  • a third aspect provides an electronic device, comprising:
  • a feature information extraction unit configured to extract feature information of the image to be processed through at least one pre-stored feature extraction network model
  • an identification information generating unit used for identifying the extracted feature information, and obtaining identification information of the feature information
  • the information sending unit is configured to send the feature information of the image to be processed and the identification information of the feature information to the second device, so as to instruct the second device to select a feature analysis network model corresponding to the identification information to process the feature information.
  • a fourth aspect provides an electronic device, comprising:
  • an information acquisition unit configured to acquire the feature information of the image to be processed and the identification information of the feature information sent by the connected first device;
  • a model determining unit configured to determine a feature analysis network model for processing the feature information according to the identification information of the feature information
  • the image processing unit is used for inputting the feature information of the image to be processed into the determined feature analysis network model to obtain the image processing result.
  • a fifth aspect provides an electronic device, including a processor, where the processor is configured to run a computer program stored in a memory, so as to implement the method of any one of the first aspect of the present application.
  • a sixth aspect provides an electronic device, including a processor, where the processor is configured to run a computer program stored in a memory to implement the method of any one of the second aspect of the present application.
  • a seventh aspect provides an image processing system, including at least one electronic device provided in the fifth aspect and at least one electronic device provided in the sixth aspect.
  • An eighth aspect provides a chip system, including a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement any one of the methods of the first aspect of the present application and/or any one of the methods of the second aspect.
  • a ninth aspect provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by one or more processors, it implements any one of the methods of the first aspect of the present application and/or any one of the methods of the second aspect.
  • a tenth aspect provides a computer program product that, when the computer program product runs on a device, causes the device to perform any one of the methods in the first aspect and/or any one of the methods in the second aspect.
  • FIG. 1 is a schematic diagram of an application scenario of an image processing method provided by an embodiment of the present application.
  • FIG. 2 is an exemplary diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 3 is another example diagram of the image processing method provided by the embodiment of the present application.
  • FIG. 4 is a schematic diagram of a hardware structure of an electronic device for executing an image processing method provided by an embodiment of the present application
  • FIG. 5 is a schematic diagram of a processing process of an image processing method performed by a first device and a second device according to an embodiment of the present application;
  • FIG. 6 is a schematic flowchart of an image processing method performed by a first device according to an embodiment of the present application
  • FIG. 7 is a schematic diagram of the correspondence between a kind of identification information and a feature analysis network model in the image processing method provided in the embodiment of the application;
  • FIG. 8 is a schematic diagram of another correspondence between identification information and a feature analysis network model in the image processing method provided in the embodiment of the present application;
  • FIG. 9 is a schematic diagram of a corresponding relationship between another identification information and a feature analysis network model in the image processing method provided in the embodiment of the application;
  • FIG. 10 is a schematic flowchart of an image processing method performed by a second device according to an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of functional architecture modules of a first device and a second device according to an embodiment of the present application.
  • one or more refers to one, two or more; "and/or", which describes the association relationship of associated objects, indicates that there may be three kinds of relationships; for example, A and/or B can mean that A exists alone, A and B exist simultaneously, or B exists alone, wherein A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects are an "or" relationship.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "including", "comprising", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • FIG. 1 is an application scenario of the image processing method provided by the embodiment of the present application.
  • As shown in FIG. 1, corresponding to a cloud platform that performs the image feature analysis process, there may be multiple camera devices that perform the image feature extraction process. Only three cameras are shown in the figure; in practical applications there may be more or fewer cameras. In one example of this application scenario, these cameras are set on different roads. In another example of this application scenario, these camera devices are installed in a factory, for example in workshops, offices, at entrances and exits, and at garage entrances. Feature extraction network models may be stored in some or all of these cameras.
  • the feature extraction network models in the cameras can extract feature information of images and/or feature information of image frames in videos.
  • After obtaining the feature information of the image, the camera device sends the feature information to the cloud platform, and the cloud platform selects an appropriate feature analysis network model, based on the image processing task to be performed, to process the received feature information.
  • the above-mentioned application scenario can perform image processing according to an example of the image processing method shown in FIG. 2.
  • multiple feature analysis network models (for example, the image classification network model in the illustration, the target detection network model and semantic segmentation network model) share a feature extraction network model.
  • the feature extraction network model can be loaded in each camera device, and multiple feature analysis network models can be loaded in the cloud platform.
  • Multiple feature analysis network models in the cloud platform share feature information extracted by a feature extraction network model in the camera device.
  • the camera device and the cloud platform can establish a communication connection in a wireless manner.
  • For example, the feature analysis network model used for target detection may require a high-performance feature extraction network model, while the feature analysis network model used for image classification does not need a high-performance feature extraction network model. In order to accommodate the most demanding feature analysis network model, a high-performance feature extraction network model has to be used, which leads to problems such as a large amount of computation and heavy memory consumption each time the camera device performs feature extraction.
  • the example of the image processing method shown in FIG. 3 can also be used.
  • different feature extraction network models can be set for each camera device according to the application.
  • the images collected by the camera device installed at the entrance and exit of the factory garage are mainly used for vehicle detection.
  • Therefore, a feature extraction network model suitable for vehicle detection can be loaded into the camera device installed at the entrance and exit of the factory garage.
  • the images collected by the camera installed at the entrance and exit of the factory are mainly used for face recognition. Therefore, the camera installed at the entrance and exit of the factory can load a feature extraction network model suitable for face recognition.
  • the cloud platform can load a feature analysis network model suitable for vehicle detection and a feature analysis network model suitable for face recognition.
  • The cloud platform may receive feature information sent by the camera installed at the entrance and exit gate of the factory, and may also receive feature information sent by the camera installed at the entrance and exit of the factory garage.
  • In order for the cloud platform to select the correct feature analysis network model to complete the corresponding image processing task, identification information can be generated for the feature information. For example, a rule is preset: the identifier of the feature extraction network model suitable for vehicle detection is 0, and the identifier of the feature extraction network model suitable for face recognition is 1.
  • After the camera device obtains the feature information, it can generate identification information for the feature information based on the identifier of the feature extraction network model. The camera device sends the feature information and the identification information of the feature information to the cloud platform; the cloud platform recognizes the identification information of the received feature information and then, according to that identification information, selects an appropriate feature analysis network model to process the received feature information, thereby obtaining an image processing result.
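  • A compact sketch of this camera-to-cloud flow under the preset rule above (identifier 0 for the vehicle-detection extractor, 1 for the face-recognition extractor); the model objects and message format below are placeholders for illustration, not the actual networks:

```python
EXTRACTOR_IDS = {"vehicle_extractor": "0", "face_extractor": "1"}

def tag_features(extractor_name, feature_info):
    """Camera side: attach identification information to the extracted features."""
    return {"features": feature_info, "id": EXTRACTOR_IDS[extractor_name]}

ANALYSIS_MODELS = {
    "0": lambda f: "vehicle detection result",   # stands in for the vehicle-detection model
    "1": lambda f: "face recognition result",    # stands in for the face-recognition model
}

def cloud_dispatch(message):
    """Cloud side: route the features to the analysis model matching the id."""
    return ANALYSIS_MODELS[message["id"]](message["features"])

print(cloud_dispatch(tag_features("face_extractor", [0.1, 0.2])))  # face recognition result
```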
  • The feature extraction network model in the first camera device is model 1, the extracted feature information is feature information A, and the corresponding identification information is 00; the feature extraction network model in the second camera device is model 2, the extracted feature information is feature information B, and the corresponding identification information is 01; the feature extraction network model in the third camera device is model 3, the extracted feature information is feature information C, and the corresponding identification information is 10.
  • model A is used for image classification tasks, model B is used for target detection tasks, and model C is used for semantic segmentation tasks.
  • the correspondence between the camera device, the feature extraction network model stored in the camera device, the identification information, the image processing task and the feature analysis network model stored in the cloud platform is: the first camera device, model 1, identification information 00, image classification task, model A; the second camera device, model 2, identification information 01, target detection task, model B; the third camera device, model 3, identification information 10, semantic segmentation task, model C.
  • these camera devices autonomously send the extracted feature information and identification information to the cloud platform.
  • the cloud platform can input the feature information whose identification information is 00 into model A corresponding to identification information 00 according to the above correspondence to complete the image classification task; the cloud platform can input the feature information whose identification information is 01 into model B corresponding to identification information 01 to complete the target detection task; and the cloud platform can input the feature information whose identification information is 10 into model C corresponding to identification information 10 according to the above correspondence to complete the semantic segmentation task.
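  • The correspondence described above can be held on the cloud platform as a simple routing table; the sketch below uses the placeholder names model A, B, and C for the three feature analysis network models:

```python
# identification information -> (feature analysis network model, image processing task)
ROUTING_TABLE = {
    "00": ("model A", "image classification"),
    "01": ("model B", "target detection"),
    "10": ("model C", "semantic segmentation"),
}

def route(identification_info):
    model_name, task = ROUTING_TABLE[identification_info]
    return f"input the feature information into {model_name} to complete the {task} task"

print(route("01"))  # -> model B, target detection
```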
  • when the user needs to perform an image classification task, the user sends an execution instruction for the image classification task to the cloud platform, the cloud platform sends a feature extraction instruction to the first camera device corresponding to the image classification task according to the above correspondence, and the first camera device inputs the collected images into model 1, obtains feature information, and generates identification information 00 for the feature information.
  • the first camera device sends the feature information and identification information 00 to the cloud platform; after the cloud platform receives the feature information and identification information 00, it inputs the feature information into model A corresponding to identification information 00 to complete the image classification task.
  • In another example, when the user determines, through the above correspondence, to perform the image feature extraction process with the third camera device, the user sends an execution instruction to the third camera device, and the third camera device obtains the feature information of the image to be processed through model 3 and generates identification information 10 for the feature information.
  • the third camera device sends the feature information and identification information 10 to the cloud platform.
  • After the cloud platform receives the feature information and identification information 10, it inputs the feature information into model C corresponding to identification information 10 to complete the semantic segmentation task.
  • a feature extraction network model can be stored in each camera. In practical applications, multiple feature extraction network models may also be stored in each camera device.
  • a new feature extraction network model can be added to the one or more cameras that need to be updated, or the new feature extraction network model can replace the old feature extraction network model, and a unique identifier is set for the new feature extraction network model accordingly. It is not necessary to add the new feature extraction network model to, or replace the old feature extraction network model in, every camera device that has a network connection with the cloud platform. Therefore, the image processing method shown in FIG. 3 increases the flexibility and scalability of network updating or deployment.
  • the electronic device where the feature extraction network model is located is denoted as the first device, and the steps performed by the first device are denoted as the image feature extraction process.
  • the electronic device where the feature analysis network model is located is recorded as the second device, and the steps performed by the second device are recorded as the image feature analysis process.
  • the first device and the second device jointly complete the image processing method.
  • the image processing system includes multiple first devices and a second device.
  • the image processing system may also include one first device and one second device; the image processing system may also include one first device and a plurality of second devices; and the image processing system may also include a plurality of first devices and a plurality of second devices.
  • When there are a plurality of second devices, the first device may determine, according to the identification information of the feature information, which second device the feature information should be sent to.
  • For example, the image processing system includes multiple first devices and multiple second devices, and the correspondence between the first device, the feature extraction network model stored in the first device, the identification information, the image processing task, the second device, and the feature analysis network model stored in the second device is:
  • the first camera device—Model 1—Identification information 00—Image classification task—Cloud platform 1—Model A; the first camera device—Model 2—Identification information 01—Target detection task—Server 2—Model B;
  • When the first camera device extracts the feature information of the image through model 1, it generates identification information 00 and sends the feature information and identification information 00 to cloud platform 1, and cloud platform 1 inputs the feature information with identification information 00 into model A;
  • When the first camera device extracts the feature information of the image through model 2, it generates identification information 01 and sends the feature information and identification information 01 to server 2, and server 2 inputs the feature information with identification information 01 into model B.
  • the image processing task can also be performed as follows: the first camera device extracts one set of feature information of the image through model 1 and generates identification information 00, and extracts another set of feature information of the image through model 2 and generates identification information 01; the first camera device sends the two sets of feature information and their corresponding identification information to cloud platform 1 and server 2; cloud platform 1 selects, from the two received sets of feature information, the feature information with identification information 00 and inputs it into model A, and server 2 selects, from the two received sets of feature information, the feature information with identification information 01 and inputs it into model B.
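  • The two variants above (send each set of feature information only to its corresponding second device, or send both sets and let each second device filter by identification information) can be sketched as follows; the destination names and message format are assumptions for illustration:

```python
DESTINATIONS = {"00": "cloud_platform_1", "01": "server_2"}

def dispatch_by_id(tagged_features):
    """First-device side: group each (features, id) message by the second
    device associated with its identification information."""
    outbox = {}
    for item in tagged_features:
        outbox.setdefault(DESTINATIONS[item["id"]], []).append(item)
    return outbox

def filter_by_id(received, wanted_id):
    """Second-device side: keep only the feature information whose
    identification information matches a locally stored analysis model."""
    return [item for item in received if item["id"] == wanted_id]

tagged = [{"id": "00", "features": [1, 2]}, {"id": "01", "features": [3, 4]}]
print(dispatch_by_id(tagged))        # what each destination would receive
print(filter_by_id(tagged, "01"))    # what server 2 keeps if both sets arrive
```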
  • the image processing system may include at least one second device, wherein the second device may store multiple feature analysis network models, or may store one feature analysis network model.
  • the camera device and cloud platform in the above application scenarios and corresponding examples are only examples of the first device and the second device.
  • the first device may be an electronic device other than a camera device, and the second device may also be an electronic device other than a cloud platform.
  • an image processing method provided in an embodiment of the present application may be applied to a first device, where the first device may be an electronic device with a camera, such as a camera, a mobile phone, or a tablet computer.
  • the first device may also not have a camera, but may receive images or videos sent by other electronic devices that have cameras.
  • the image processing method provided in the embodiment of the present application may also be applied to a second device, and the second device may be an electronic device with image feature analysis capability, such as a cloud platform, a server, a computer, a notebook, and a mobile phone.
  • the first device and the second device may be the same electronic device, for example, both the first device and the second device may be mobile phones.
  • Both the image feature extraction process and the image feature analysis process are executed in the processor of the mobile phone, or the image feature extraction process is executed in a first processor of the mobile phone and the image feature analysis process is executed in a second processor of the mobile phone.
  • At least one first device and at least one second device constitute an image processing system.
  • FIG. 4 shows a schematic structural diagram of an electronic device.
  • the electronic device can be used as the first device to perform the image feature extraction process in the image processing method, can also be used as the second device to perform the image feature analysis process in the image processing method, and can also be used as an electronic device to perform the image feature extraction process in the image processing method. Extraction process and image feature analysis process.
  • the electronic device 400 may include a processor 410, an external memory interface 420, an internal memory 421, a universal serial bus (USB) interface 430, a charge management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2 , mobile communication module 450, wireless communication module 460, audio module 470, speaker 470A, receiver 470B, microphone 470C, headphone jack 470D, sensor module 480, buttons 490, motor 491, indicator 492, camera 493, display screen 494, and Subscriber identification module (subscriber identification module, SIM) card interface 495 and so on.
  • the sensor module 480 may include a pressure sensor 480A, a gyroscope sensor 480B, an air pressure sensor 480C, a magnetic sensor 480D, an acceleration sensor 480E, a distance sensor 480F, a proximity light sensor 480G, a fingerprint sensor 480H, a temperature sensor 480J, a touch sensor 480K, an ambient light sensor 480L, and the like.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 400 .
  • the electronic device 400 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software or a combination of software and hardware.
  • the processor 410 may include one or more processing units, for example, the processor 410 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the processor 410 is configured to perform the image feature extraction process in the image processing method in the embodiment of the present application, for example, steps 601 to 603 below, and/or to perform the image feature analysis process in the image processing method in the embodiment of the present application, for example, steps 1001 to 1003 below.
  • the controller may be the nerve center and command center of the electronic device 400 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 410 for storing instructions and data.
  • the memory in processor 410 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 410 . If the processor 410 needs to use the instruction or data again, it can be called directly from memory. Repeated accesses are avoided, and the waiting time of the processor 410 is reduced, thereby improving the efficiency of the system.
  • processor 410 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • processor 410 may contain multiple sets of I2C buses.
  • the processor 410 can be respectively coupled to the touch sensor 480K, the charger, the flash, the camera 493 and the like through different I2C bus interfaces.
  • the processor 410 may couple the touch sensor 480K through the I2C interface, so that the processor 410 communicates with the touch sensor 480K through the I2C bus interface, so as to realize the touch function of the electronic device 400 .
  • the I2S interface can be used for audio communication.
  • processor 410 may contain multiple sets of I2S buses.
  • the processor 410 may be coupled with the audio module 470 through an I2S bus to implement communication between the processor 410 and the audio module 470 .
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • audio module 470 and wireless communication module 460 may be coupled through a PCM bus interface.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus that converts the data to be transmitted between serial communication and parallel communication.
  • the MIPI interface can be used to connect the processor 410 with peripheral devices such as the display screen 494 and the camera 493 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 410 communicates with the camera 493 through a CSI interface to implement the photographing function of the electronic device 400 .
  • the processor 410 communicates with the display screen 494 through the DSI interface to implement the display function of the electronic device 400 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 410 with the camera 493, the display screen 494, the wireless communication module 460, the audio module 470, the sensor module 480, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 430 is an interface that conforms to the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 430 can be used to connect a charger to charge the electronic device 400, and can also be used to transmit data between the electronic device 400 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 400 .
  • the electronic device 400 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 440 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 440 may receive charging input from the wired charger through the USB interface 430 .
  • the charging management module 440 may receive wireless charging input through a wireless charging coil of the electronic device 400 . While the charging management module 440 charges the battery 442 , it can also supply power to the electronic device through the power management module 441 .
  • the power management module 441 is used to connect the battery 442 , the charging management module 440 and the processor 410 .
  • the power management module 441 receives input from the battery 442 and/or the charging management module 440, and supplies power to the processor 410, the internal memory 421, the external memory, the display screen 494, the camera 493, and the wireless communication module 460.
  • the power management module 441 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 441 may also be provided in the processor 410 . In other embodiments, the power management module 441 and the charging management module 440 may also be provided in the same device.
  • the wireless communication function of the electronic device 400 may be implemented by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 400 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 450 may provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the electronic device 400 .
  • the mobile communication module 450 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 450 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 450 can also amplify the signal modulated by the modulation and demodulation processor, and then convert it into electromagnetic waves for radiation through the antenna 1 .
  • At least part of the functional modules of the mobile communication module 450 may be provided in the processor 410 . In some embodiments, at least part of the functional modules of the mobile communication module 450 may be provided in the same device as at least part of the modules of the processor 410 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to speaker 470A, receiver 470B, etc.), or displays images or videos through display screen 494.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 410, and may be provided in the same device as the mobile communication module 450 or other functional modules.
  • the wireless communication module 460 can provide wireless communication solutions applied on the electronic device 400, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 460 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 460 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 410 .
  • the wireless communication module 460 can also receive the signal to be sent from the processor 410 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 400 is coupled with the mobile communication module 450, and the antenna 2 is coupled with the wireless communication module 460, so that the electronic device 400 can communicate with the network and other devices through wireless communication technology.
  • Wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 400 implements a display function through a GPU, a display screen 494, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 494 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 410 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 494 is used to display images, video, and the like.
  • Display screen 494 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 400 may include 1 or N display screens 494, where N is a positive integer greater than 1.
  • the electronic device 400 can realize the shooting function through the ISP, the camera 493, the video codec, the GPU, the display screen 494 and the application processor.
  • the ISP is used to process the data fed back by the camera 493 .
  • When the shutter is opened, light is transmitted through the lens to the camera photosensitive element, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 493 .
  • Camera 493 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 400 may include 1 or N cameras 493 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 400 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy, and the like.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 400 may support one or more video codecs.
  • the electronic device 400 can play or record videos in various encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 400 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the NPU or other processors may be used to perform operations such as face detection, face tracking, face feature extraction, and image clustering on the face images in videos stored by the electronic device 400; to perform operations such as face detection and face feature extraction on the face images in pictures stored by the electronic device 400; and to cluster the pictures stored in the electronic device 400 according to the face features of the pictures and the clustering results of the face images in the videos.
  • the external memory interface 420 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 400.
  • the external memory card communicates with the processor 410 through the external memory interface 420 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 421 may be used to store computer executable program code, which includes instructions.
  • the processor 410 executes various functional applications and data processing of the electronic device 400 by executing the instructions stored in the internal memory 421 .
  • the internal memory 421 may include a storage program area and a storage data area.
  • the storage program area can store an operating system and an application program required for at least one function (such as a sound playback function, an image playback function, etc.).
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 400 .
  • the internal memory 421 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 400 may implement audio functions through an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, an earphone interface 470D, and an application processor. Such as music playback, recording, etc.
  • the audio module 470 is used for converting digital audio signals to analog audio signal outputs, and also for converting analog audio inputs to digital audio signals. Audio module 470 may also be used to encode and decode audio signals. In some embodiments, the audio module 470 may be provided in the processor 410 , or some functional modules of the audio module 470 may be provided in the processor 410 .
  • Speaker 470A also referred to as a "speaker" is used to convert audio electrical signals into sound signals.
  • the electronic device 400 can listen to music through the speaker 470A, or listen to a hands-free call.
  • the receiver 470B also referred to as "earpiece" is used to convert audio electrical signals into sound signals.
  • the voice can be answered by placing the receiver 470B close to the human ear.
  • The microphone 470C, also called a "mic" or "mike", is used to convert sound signals into electrical signals.
  • The user can input a sound signal into the microphone 470C by speaking close to the microphone 470C.
  • the electronic device 400 may be provided with at least one microphone 470C. In other embodiments, the electronic device 400 may be provided with two microphones 470C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 400 may also be provided with three, four or more microphones 470C.
  • the headphone jack 470D is used to connect wired headphones.
  • the earphone interface 470D can be a USB interface 430, or can be a 3.5mm open mobile terminal platform (OMTP) standard interface, a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 480A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • pressure sensor 480A may be provided on display screen 494 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to pressure sensor 480A, the capacitance between the electrodes changes.
  • the electronic device 400 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 494, the electronic device 400 detects the intensity of the touch operation according to the pressure sensor 480A.
  • the electronic device 400 may also calculate the touched position according to the detection signal of the pressure sensor 480A.
  • the gyro sensor 480B can be used to determine the motion attitude of the electronic device 400 .
  • the angular velocity of electronic device 400 about three axes may be determined by gyro sensor 480B.
  • the gyro sensor 480B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 480B detects the shaking angle of the electronic device 400, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to counteract the shaking of the electronic device 400 through reverse motion to achieve anti-shake.
  • the gyroscope sensor 480B can also be used for navigation and somatosensory game scenarios.
  • Air pressure sensor 480C is used to measure air pressure. In some embodiments, the electronic device 400 calculates the altitude through the air pressure value measured by the air pressure sensor 480C to assist in positioning and navigation.
  • Magnetic sensor 480D includes a Hall sensor.
  • the electronic device 400 can detect the opening and closing of the flip holster using the magnetic sensor 480D.
  • the electronic device 400 can detect the opening and closing of the flip according to the magnetic sensor 480D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 480E can detect the magnitude of the acceleration of the electronic device 400 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 400 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 480F is used to measure distance. The electronic device 400 can measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 400 can use the distance sensor 480F to measure distance to achieve fast focusing.
  • Proximity light sensor 480G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 400 emits infrared light to the outside through light emitting diodes.
  • Electronic device 400 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 400 . When insufficient reflected light is detected, the electronic device 400 may determine that there is no object near the electronic device 400 .
  • the electronic device 400 can use the proximity light sensor 480G to detect that the user holds the electronic device 400 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 480G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 480L is used to sense ambient light brightness.
  • the electronic device 400 can adaptively adjust the brightness of the display screen 494 according to the perceived ambient light brightness.
  • the ambient light sensor 480L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 480L can also cooperate with the proximity light sensor 480G to detect whether the electronic device 400 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 480H is used to collect fingerprints.
  • the electronic device 400 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 480J is used to detect the temperature.
  • the electronic device 400 uses the temperature detected by the temperature sensor 480J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 480J exceeds a threshold, the electronic device 400 reduces the performance of a processor located near the temperature sensor 480J, so as to reduce power consumption and implement thermal protection.
  • in other embodiments, when the temperature is lower than another threshold, the electronic device 400 heats the battery 442 to avoid an abnormal shutdown of the electronic device 400 caused by the low temperature.
  • in still other embodiments, when the temperature is lower than yet another threshold, the electronic device 400 boosts the output voltage of the battery 442 to avoid an abnormal shutdown caused by low temperature.
  • the touch sensor 480K is also called a "touch panel".
  • the touch sensor 480K may be disposed on the display screen 494; the touch sensor 480K and the display screen 494 form a touch screen, also called a "touchscreen".
  • the touch sensor 480K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 494 .
  • the touch sensor 480K may also be disposed on the surface of the electronic device 400 at a different location than the display screen 494 .
  • the bone conduction sensor 480M can acquire vibration signals.
  • in some embodiments, the bone conduction sensor 480M can acquire the vibration signal of the bone mass vibrated by the human vocal part.
  • the bone conduction sensor 480M can also be placed against the human pulse to receive the blood pressure beat signal.
  • in some embodiments, the bone conduction sensor 480M can also be disposed in an earphone to form a bone conduction earphone.
  • the audio module 470 can parse out a voice signal based on the vibration signal of the vocal vibration bone mass acquired by the bone conduction sensor 480M, so as to realize the voice function.
  • the application processor can parse heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 480M, so as to realize the heart rate detection function.
  • the keys 490 include a power-on key, a volume key, and the like. The keys 490 may be mechanical keys or touch keys.
  • the electronic device 400 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 400 .
  • Motor 491 can generate vibrating cues.
  • the motor 491 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 491 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 494 .
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 492 may be an indicator light, which may be used to indicate the charging status, the change of power, and may also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 495 is used to connect a SIM card.
  • the SIM card can be inserted into the SIM card interface 495 or pulled out from the SIM card interface 495 to achieve contact and separation with the electronic device 400 .
  • the electronic device 400 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 495 can support Nano SIM card, Micro SIM card, SIM card and so on.
  • the same SIM card interface 495 can hold multiple cards inserted at the same time. The multiple cards may be of the same type or of different types.
  • the SIM card interface 495 can also be compatible with different types of SIM cards.
  • the SIM card interface 495 is also compatible with external memory cards.
  • the electronic device 400 interacts with the network through the SIM card to implement functions such as calls and data communication.
  • the electronic device 400 employs an eSIM, ie: an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 400 and cannot be separated from the electronic device 400 .
  • it should be noted that if the second device is a server, the server includes a processor and a communication interface.
  • in the embodiments of the present application, the specific structure of the execution body that performs the image feature extraction process and the image feature analysis process is not particularly limited, as long as it can run a program recording the code of the image feature extraction process and/or the image feature analysis process of the image processing method according to the embodiments of the present application, so as to communicate according to the image feature extraction process and/or the image feature analysis process of the image processing method according to the embodiments of the present application.
  • for example, the execution body of the image processing method provided in the embodiments of the present application may be a functional module in the first device that can call and execute a program, or an apparatus applied in the first device, such as a chip; likewise, the execution body of the image processing method provided in the embodiments of the present application may be a functional module in the second device that can call and execute a program, or an apparatus applied in the second device, such as a chip.
  • in the above application scenario and the corresponding examples, the description is given by taking as an example multiple camera devices (each loading one feature extraction network model) corresponding to one cloud platform to complete the image processing task.
  • for a clearer understanding of the present application, the subsequent embodiments are described by taking as an example one of the multiple camera devices cooperating with the corresponding cloud platform to complete the image processing task; the camera device may load one feature extraction network model, or may load multiple feature extraction network models.
  • FIG. 5 is an example diagram of an image processing method performed by a first device and a second device according to an embodiment of the present application.
  • in this example, the first device loads a feature extraction network model and extracts the feature information of the image to be processed through the feature extraction network model; the first device generates the identification information of the feature information and sends the feature information and the identification information to the second device.
  • the second device loads an image classification network model, a target detection network model and a semantic segmentation network model; the second device identifies the identification information of the received feature information, selects the feature analysis network model corresponding to the identification information according to the received identification information, and inputs the feature information into the selected feature analysis network model to complete the corresponding image processing task.
  • FIG. 6 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in the figure, the method is applied to a first device, and the method includes:
  • Step 601 The first device extracts feature information of the image to be processed through at least one pre-stored feature extraction network model.
  • the feature extraction network model may include: a VGG model, a ResNet model, an Inception model, and the like. It may also be a feature extraction network model other than those listed above, which is not limited in the embodiments of the present application.
  • the feature information of the image includes a feature map obtained after processing the image to be processed by the feature extraction network model in the first device.
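  • for illustration only, the sketch below shows how a first device might produce such a feature map. It assumes PyTorch is available; the tiny two-stage CNN is a stand-in for backbones such as VGG or ResNet and is not the actual feature extraction network model of this application.

```python
# Minimal sketch of the feature extraction step on the first device (step 601).
# Assumes PyTorch; the toy network below is purely illustrative.
import torch
import torch.nn as nn

class TinyFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        # convolution -> pooling stages, mirroring the layered structure described later
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, image):
        # the returned tensor is the feature map serving as the feature information
        return self.stage2(self.stage1(image))

extractor = TinyFeatureExtractor()
image_to_process = torch.randn(1, 3, 224, 224)   # placeholder input image
feature_info = extractor(image_to_process)       # feature information of the image
print(feature_info.shape)                        # torch.Size([1, 32, 56, 56])
```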
  • Step 602 the first device identifies the extracted feature information to obtain identification information of the feature information.
  • the feature extraction network model in the first device is used to extract feature information of the image
  • the feature analysis network model in the second device is used to perform corresponding image processing tasks on the image based on the feature information of the image.
  • however, there may be multiple feature analysis network models in the second device to complete different image processing tasks. As an example, the second device contains model A, model B and model C: model A is used to obtain the image classification result according to the feature information of the image, model B is used to obtain the target detection result according to the feature information of the image, and model C is used to obtain the semantic segmentation result according to the feature information of the image.
  • after the feature extraction network model in the first device has extracted the feature information of the image, the feature information can be identified according to the image processing task to be performed, so that after receiving the feature information of the image, the second device can determine from the identification information whether to select model A, model B or model C for the subsequent image processing task.
  • the identification information of the feature information can be determined according to the image processing task to be performed.
  • the identification information of the A model used to obtain the image classification result according to the feature information of the image is 00 in the second device.
  • the identification information of the B model used to obtain the target detection result according to the feature information of the image is 01
  • the identification information of the C model used to obtain the semantic segmentation result according to the feature information of the image in the second device is 11.
  • an appropriate feature extraction network model can be selected according to the image processing task to be performed on the image to be processed. Then, according to the image processing task to be performed, the extracted feature information is identified.
  • the feature information can also be extracted and identified in other ways, for details, please refer to the related descriptions in the subsequent FIGS. 7 to 9 .
  • when the image processing task to be performed on the image to be processed is image classification, the identification information of the feature information can be set to 00; when the image processing task is target detection, the identification information can be set to 01; when the image processing task is semantic segmentation, the identification information can be set to 11.
  • the identification information of the feature information may also be generated according to other information, for details, please refer to the subsequent description.
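  • as a sketch of the task-based identification described above, the snippet below maps the image processing task to the identifiers 00, 01 and 11 from the example; the function and dictionary names are illustrative placeholders, not taken from the patent.

```python
# Illustrative sketch of step 602: tag the extracted feature information with an
# identifier derived from the image processing task (00 / 01 / 11 as in the example).
TASK_TO_ID = {
    "image_classification": "00",
    "target_detection": "01",
    "semantic_segmentation": "11",
}

def identify_feature_info(feature_info, task):
    """Return the feature information together with its identification information."""
    return feature_info, TASK_TO_ID[task]

feature_info = [[0.12, 0.56], [0.34, 0.78]]           # placeholder feature map
payload = identify_feature_info(feature_info, "target_detection")
print(payload[1])                                      # "01"
# the first device then sends both the feature information and "01" to the second device
```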
  • Step 603 The first device sends the feature information of the image to be processed and the identification information of the feature information to the second device to instruct the second device to select a feature analysis network model corresponding to the identification information to process the feature information.
  • in the embodiments of the present application, when the first device and the second device are not the same device, the first device sends the feature information of the image to be processed and the identification information of the feature information to the second device.
  • when the first device and the second device are the same device, the following situations may exist:
  • Case 1: the feature extraction network model is located in a first processor, and the feature analysis network model is located in a second processor.
  • in this case, the first device sending the feature information of the image to be processed and the identification information of the feature information to the second device includes: the first processor of the first device sends the feature information of the image to be processed and the identification information of the feature information to the second processor of the first device.
  • Case 2: the feature extraction network model and the feature analysis network model are located in the same processor.
  • in this case, the first device sending the feature information of the image to be processed and the identification information of the feature information to the second device includes: the feature extraction function module of the first device sends the feature information of the image to be processed and the identification information of the feature information to the feature analysis function module of the first device, where the feature extraction function module stores the feature extraction network model and the feature analysis function module stores the feature analysis network model.
  • in the embodiments of the present application, the first device generates identification information for the feature information of the image to be processed, so as to instruct the second device, after receiving the identification information, to select the corresponding feature analysis network model according to the identification information and complete the corresponding image processing task. In this way, the feature information obtained by the feature analysis network model in the second device is matched feature information, which alleviates the problem of poor multi-task image processing performance in the second device.
  • the first device may obtain the identification information of the feature information in the following manner:
  • the first device obtains the identifier of the feature extraction network model for extracting feature information
  • the first device takes the identification of the feature extraction network model from which the feature information is extracted as the identification information of the feature information.
  • referring to FIG. 7, as an example, the image processing tasks in the second device are correspondingly provided with three models (model A, model B and model C) whose requirements on the feature information differ, and the first device is provided with three feature extraction network models (model 1, model 2 and model 3): model 1 can meet the image processing task of model B, model 2 can meet the image processing task of model C, and model 3 can meet the image processing task of model A.
  • besides the above relationships, other situations may exist. For example, the feature information obtained by model 3 may meet not only the requirements of model A described above but also the requirements of model B; however, compared with model 1, the process of extracting feature information with model 3 occupies more memory, resulting in a waste of resources.
  • in this case, to avoid high memory usage and wasted resources, the correspondence between the identification information and the feature analysis network models is set to the correspondence shown in FIG. 7, that is, the feature information extracted by model 3 is input into model A, and the feature information extracted by model 1 is input into model B.
  • when the user wants to perform the image processing task corresponding to model A, the user can, according to the correspondence shown in FIG. 7, input the image to be processed into model 3 in the first device; model 3 in the first device outputs the feature information of the image to be processed and the identification information 10 of the feature information.
  • the first device sends the feature information of the image to be processed and the identification information 10 of the feature information to the second device.
  • after receiving the feature information and the corresponding identification information 10, the second device determines model A as the target model according to the correspondence and inputs the feature information into model A, so as to obtain the image processing result desired by the user.
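  • a minimal sketch of the FIG. 7 correspondence on the second device side is shown below. Only the mapping 10 to model A is stated in the text above; the other entry and the model functions are assumed placeholders.

```python
# Illustrative dispatch table on the second device for the FIG. 7 example.
def run_model_a(feature_info):          # feature analysis network model A (placeholder)
    return "result of model A"

def run_model_b(feature_info):          # feature analysis network model B (placeholder)
    return "result of model B"

# identification information -> feature analysis network model
# "10" -> model A comes from the example above; "00" -> model B is an assumed entry.
CORRESPONDENCE = {"10": run_model_a, "00": run_model_b}

def handle(feature_info, identification_info):
    analysis_model = CORRESPONDENCE[identification_info]
    return analysis_model(feature_info)

print(handle(feature_info="feature map from model 3", identification_info="10"))
# feature information extracted by model 3 (identifier 10) is routed to model A
```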
  • the first device may obtain the identification information of the feature information in the following manner:
  • the first device obtains the identification of the output level of the feature information, wherein the output level of the feature information is the level of the output feature information in the feature extraction network model for extracting the feature information;
  • the first device takes the identification of the output level of the characteristic information as identification information of the characteristic information.
  • the feature extraction network model may have multiple levels.
  • for example, the structure of a feature extraction network model may include multiple convolutional layers, multiple pooling layers and fully connected layers, arranged in the form of convolutional layer, pooling layer, convolutional layer, pooling layer, ..., fully connected layer. These layers may have the following relationship: the output of the previous layer is the input of the next layer, and finally one of the layers outputs the feature information of the image.
  • in practical applications, a feature analysis network model may require not only the output of the last level but also the output of one or more intermediate levels; or a feature analysis network model may not require the output of the last level at all, and instead require only the output of one or more intermediate levels.
  • therefore, a specific level of the feature extraction network model in the first device can be set as an output level that outputs the feature information of the image, and the corresponding identification information is generated according to the output level that outputs the feature information, so that the second device can select an appropriate feature analysis network model.
  • the levels that can output the feature information of the image can be used as output levels of the feature information.
  • the above example of the feature extraction network model does not mean to limit the structure of the feature extraction network model.
  • the above feature extraction network model may also be a VGG, DenseNet, or a feature extraction network model with a feature pyramid structure.
  • as an example, the image processing tasks in the second device are correspondingly provided with two feature analysis network models, model A and model B.
  • the image processing task corresponding to model A requires the feature information output from output level 2 to output level 5 of model 1.
  • the image processing task corresponding to model B requires the feature information output from output level 3 to output level 5 of model 1.
  • the feature extraction network model in the first device has 4 output levels: output level 2, output level 3, output level 4, and output level 5.
  • the identification information corresponding to the feature information of the output level 2 to the output level 5 of the model 1 may be set to 0, and the identification information corresponding to the feature information of the output level 3 to the output level 5 of the model 1 may be set to 1.
  • the above example, in which the first device stores a single feature extraction network model (model 1), describes how to use the identifier of the output level as the identification information of the feature information.
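  • the sketch below illustrates this output-level scheme for the model 1 example (identifier 0 for output levels 2 to 5, identifier 1 for output levels 3 to 5). It assumes PyTorch; the toy backbone only stands in for a real multi-level feature extraction network model.

```python
# Illustrative multi-level extractor that tags its output with the level-set identifier.
import torch
import torch.nn as nn

class MultiLevelExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.levels = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
             for c_in, c_out in [(3, 8), (8, 16), (16, 32), (32, 64)]]
        )  # treated here as output level 2, 3, 4 and 5

    def forward(self, image, level_set_id):
        outputs, x = [], image
        for index, level in enumerate(self.levels, start=2):
            x = level(x)
            outputs.append((index, x))
        # identifier 0: keep output levels 2-5; identifier 1: keep output levels 3-5
        wanted = range(2, 6) if level_set_id == 0 else range(3, 6)
        return [feat for idx, feat in outputs if idx in wanted], level_set_id

extractor = MultiLevelExtractor()
features, identification_info = extractor(torch.randn(1, 3, 64, 64), level_set_id=1)
print(len(features), identification_info)   # 3 feature maps, identifier 1
```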
  • the first device can obtain the identification information of the feature information in the following manner:
  • the first device obtains the identifier of the feature extraction network model for extracting feature information
  • the first device obtains the identification of the output level of the feature information, wherein the output level of the feature information is the level of the output feature information in the feature extraction network model for extracting the feature information;
  • the first device takes the identification of the feature extraction network model for extracting the feature information and the identification of the output level of the feature information as identification information of the feature information.
  • as an example, there are two feature extraction network models in the first device: model 1 and model 2.
  • the identifier of model 1 may be 0, and the identifier of model 2 may be 1.
  • Model 1 corresponds to 1 output level;
  • Model 2 corresponds to 4 output levels: output level 2, output level 3, output level 4 and output level 5.
  • the identifiers of the output level 2 to the output level 4 of the model 2 are 0, and the identifiers of the output level 3 to the output level 5 of the model 2 are 1.
  • in this case, the identification information of the feature information obtained by model 1 is 0X (it can be 00 or 01), the identification information of the feature information obtained from output level 2 to output level 4 of model 2 is 10, and the identification information of the feature information obtained from output level 3 to output level 5 of model 2 is 11.
  • the feature analysis network models in the second device have the following correspondence with the identification information of the feature information: 00—model A, 10—model B, 11—model C; or 01—model A, 10—model B, 11—model C.
  • when the user needs to perform the image processing task corresponding to model B on the image to be processed, the user looks up the correspondence and determines that the feature information output from output level 2 to output level 4 of model 2 in the first device is required. To control output level 2 to output level 4 of model 2 to output the feature information, the user can also, on the second device side, send an instruction to the first device through the second device so as to control output level 2 to output level 4 of model 2 in the first device to output the feature information.
  • after the feature information is obtained, the identifier 1 of the extraction model of the feature information and the identifier 0 of the output level of the feature information can be generated, and 10 is used as the identification information of the feature information.
  • the first device sends the feature information and the identification information 10 to the second device, and the second device inputs the feature information into the model B according to the identification information 10 to obtain an image processing result.
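  • the following sketch combines the extraction-model identifier and the output-level identifier into one identification string, following the model 1 / model 2 example above; the selection logic mirrors the listed correspondence (0X to model A, 10 to model B, 11 to model C) and is only illustrative.

```python
# Illustrative combination of the two identifiers into identification information.
def build_identification_info(model_id: int, output_level_id: int) -> str:
    return f"{model_id}{output_level_id}"

def select_analysis_model(identification_info: str) -> str:
    # 0X -> model A (model 1), 10 -> model B, 11 -> model C, as in the example above
    if identification_info.startswith("0"):
        return "model A"
    return {"10": "model B", "11": "model C"}[identification_info]

# model 2 has identifier 1; its output level 2 to output level 4 set has identifier 0
identification_info = build_identification_info(model_id=1, output_level_id=0)
print(identification_info, "->", select_analysis_model(identification_info))  # 10 -> model B
```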
  • in the above examples, the identifier of the feature extraction network model that extracts the feature information is used as the identification information of the feature information, or the identifier of the output level of the feature information is used as the identification information, or both are used together as the identification information.
  • in practical applications, the identification information can be set according to the actual situation.
  • as an example, the field corresponding to the identification information can be divided into a first field for representing the feature extraction network model that extracts the feature information and a second field for representing the output level of the feature information.
  • the first field occupies m bytes, and the second field occupies n bytes.
  • the value of m may be determined according to the number of feature extraction network models in the first device, and the value of n may be determined according to the number of output forms of the feature extraction network models.
  • m and n can be set relatively large; for example, setting m to 4 bytes can cover up to 2⁴ feature extraction network models.
  • setting n to 4 bytes can cover up to 2⁴ output forms of the feature extraction network model.
  • the output form represents a set of output levels that output feature information; for example, for a feature extraction network model, the output level sets {output level 1}, {output level 3}, {output level 2 to output level 4} and {output level 3 to output level 5} are 4 different output forms.
  • m and n may also be other values, which are not limited in this application.
  • as an example, the first field and the second field can be regarded as one overall field, in which m consecutive bytes represent the feature extraction network model that extracts the feature information and n consecutive bytes represent the output level of the feature information.
  • the overall field may also include bytes representing other meanings, and the embodiment of the present application does not limit the total number of bytes of the overall field.
  • the first field and the second field can be used as two completely independent fields.
  • the first independent field includes the first field, and may further include at least one byte used to distinguish whether the current independent field carries the identifier of the extraction model or the identifier of the output level.
  • the second independent field includes the second field, and may further include at least one byte for distinguishing whether the current independent field is the identification of the extraction model or the identification of the output level.
  • the first device sends the first independent field and the second independent field to the second device as identification information of the feature information.
  • as an example, the first byte in the first independent field is 0, indicating that the m consecutive bytes stored in this independent field are the identifier of the extraction model; the independent field further includes m consecutive bytes used to represent the feature extraction network model that extracts the feature information.
  • the first byte in the second independent field is 1, indicating that the n consecutive bytes stored in this independent field are the identifier of the output level; the independent field further includes n consecutive bytes used to represent the output level of the feature information.
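  • the snippet below sketches both framing options described above, assuming m = n = 1 byte for brevity; it is an illustrative encoding, not a wire format defined by this application.

```python
# Illustrative packing of the identification information into byte fields.
import struct

def pack_overall_field(model_id: int, output_level_id: int) -> bytes:
    # option 1: one overall field with m bytes for the model identifier
    # followed by n bytes for the output-level identifier
    return struct.pack("BB", model_id, output_level_id)

def pack_independent_fields(model_id: int, output_level_id: int) -> bytes:
    # option 2: two independent fields, each prefixed with one byte saying whether it
    # carries the extraction-model identifier (0) or the output-level identifier (1)
    return struct.pack("BB", 0, model_id) + struct.pack("BB", 1, output_level_id)

def unpack_independent_fields(data: bytes) -> dict:
    info = {}
    for offset in range(0, len(data), 2):
        kind, value = struct.unpack_from("BB", data, offset)
        info["model_id" if kind == 0 else "output_level_id"] = value
    return info

print(unpack_independent_fields(pack_independent_fields(2, 1)))
# {'model_id': 2, 'output_level_id': 1}
```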
  • FIG. 10 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in the figure, the method is applied to a second device, and the method includes:
  • Step 1001 The second device acquires the feature information of the image to be processed and the identification information of the feature information sent by the first device connected to the second device.
  • Step 1002 the second device determines a feature analysis network model for processing the feature information according to the identification information of the feature information.
  • Step 1003 the second device inputs the feature information of the image to be processed into the determined feature analysis network model to obtain the image processing result.
  • the second device needs to cooperate with the first device to complete the image processing task: the first device completes the image feature extraction task, and the second device analyzes the feature information extracted by the first device to obtain the image processing result. Therefore, there is a connection between the first device and the second device, which may be a wired connection or a wireless connection.
  • the feature information extracted by the first device has identification information, and the identification information is used to instruct the second device to select an appropriate feature analysis network model to complete the corresponding image processing task. After acquiring the identification information of the feature information of the image to be processed, the second device needs to determine a feature analysis network model for processing the feature information according to the identification information.
  • the identification information of the feature information obtained by the second device includes: the identifier of the feature extraction network model that extracts the feature information; and/or the identifier of the output level of the feature information, where the output level of the feature information is the level that outputs the feature information in the feature extraction network model that extracts the feature information.
  • the second device determining the feature analysis network model for processing the feature information according to the identification information of the feature information includes:
  • the second device obtains the correspondence between the identification information and the feature analysis network model
  • the second device uses the feature analysis network model corresponding to the identification information of the feature information as the feature analysis network model for processing the feature information.
  • in practical applications, the correspondence may include not only the correspondence between the identification information and the feature analysis network models, but also correspondences involving other information.
  • the described correspondence is the correspondence between the first device, the feature extraction network model stored in the first device, the identification information, the image processing task and the feature analysis network model stored in the second device .
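  • a minimal sketch of steps 1001 to 1003 on the second device is given below; the correspondence table and the analysis models are assumed placeholders.

```python
# Illustrative receive-and-dispatch flow on the second device.
ANALYSIS_MODELS = {
    "00": lambda feature_info: {"task": "image classification", "result": "..."},
    "01": lambda feature_info: {"task": "target detection", "result": "..."},
    "11": lambda feature_info: {"task": "semantic segmentation", "result": "..."},
}

def process_on_second_device(feature_info, identification_info):
    analysis_model = ANALYSIS_MODELS.get(identification_info)   # step 1002
    if analysis_model is None:
        raise ValueError(f"no feature analysis network model for id {identification_info}")
    return analysis_model(feature_info)                         # step 1003

# step 1001 would receive (feature_info, identification_info) from the first device
print(process_on_second_device(feature_info="dummy feature map", identification_info="01"))
```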
  • the first device and the second device may be divided into functional units according to the foregoing method examples.
  • each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one in the processing unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and other division methods may be used in actual implementation. The following is an example of dividing each functional unit corresponding to each function to illustrate:
  • the first device 1110 includes:
  • the feature information extraction unit 1111 is used to extract the feature information of the image to be processed through at least one pre-stored feature extraction network model;
  • An identification information generating unit 1112 configured to identify the extracted feature information to obtain identification information of the feature information
  • the information sending unit 1113 is configured to send the feature information of the image to be processed and the identification information of the feature information to the second device, so as to instruct the second device to select a feature analysis network model corresponding to the identification information to process the feature information.
  • the identification information generating unit 1112 is further configured to:
  • the identification of the feature extraction network model for extracting the feature information is obtained; the identification of the feature extraction network model for extracting the feature information is used as the identification information of the feature information.
  • the identification information generating unit 1112 is further configured to:
  • the identification of the output level of the feature information is obtained, wherein the output level of the feature information is the level of the output feature information in the feature extraction network model for extracting the feature information; the identification of the output level of the feature information is used as the identification information of the feature information.
  • the identification information generating unit 1112 is further configured to:
  • obtain the identifier of the feature extraction network model that extracts the feature information; obtain the identifier of the output level of the feature information, where the output level of the feature information is the level that outputs the feature information in the feature extraction network model that extracts the feature information; and use the identifier of the feature extraction network model that extracts the feature information and the identifier of the output level of the feature information as the identification information of the feature information.
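  • as a rough sketch of this functional-unit division, the class below groups the three units of the first device 1110; all names and identifiers are illustrative placeholders, and the real units may be implemented in hardware or as software functional units as noted later.

```python
# Illustrative grouping of the first device's functional units (1111-1113).
class FirstDevice:
    def __init__(self, feature_extractors, send):
        self.feature_extractors = feature_extractors    # feature information extraction unit 1111
        self.send = send                                 # information sending unit 1113

    def generate_identification_info(self, model_name):  # identification information generating unit 1112
        return {"model_1": "0", "model_2": "1"}[model_name]   # assumed identifiers

    def process(self, image, model_name):
        feature_info = self.feature_extractors[model_name](image)
        identification_info = self.generate_identification_info(model_name)
        self.send(feature_info, identification_info)

device = FirstDevice({"model_1": lambda img: "feature map of " + img}, send=print)
device.process("image_to_process", "model_1")   # prints: feature map of image_to_process 0
```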
  • the second device 1120 includes:
  • an information acquisition unit 1121 configured to acquire the feature information of the image to be processed and the identification information of the feature information sent by the connected first device;
  • a model determining unit 1122 configured to determine a feature analysis network model for processing the feature information according to the identification information of the feature information
  • the image processing unit 1123 is configured to input the feature information of the image to be processed into the determined feature analysis network model to obtain the image processing result.
  • the model determining unit 1122 is further configured to: obtain the correspondence between the identification information and the feature analysis network models, and, according to the correspondence, use the feature analysis network model corresponding to the identification information of the feature information as the feature analysis network model for processing the feature information.
  • the identification information of the feature information includes: the identifier of the feature extraction network model that extracts the feature information; and/or the identifier of the output level of the feature information, where the output level of the feature information is the level that outputs the feature information in the feature extraction network model that extracts the feature information.
  • in addition, each functional unit in the embodiments may be integrated in one processing unit, each unit may exist physically alone, or two or more units may be integrated in one unit; the above integrated units may be implemented in the form of hardware or in the form of software functional units.
  • the specific names of the functional units are only for the convenience of distinguishing from each other, and are not used to limit the protection scope of the present application.
  • Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiments of the present application further provide a computer program product, which enables the first device to implement the steps in the foregoing method embodiments when the computer program product runs on the first device.
  • the integrated unit if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of the present application can be implemented by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code
  • the computer program code may be in the form of source code, object code, executable file or some intermediate forms, and the like.
  • the computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the first device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media.
  • for example, the computer-readable medium may be a U disk (USB flash drive), a removable hard disk, a magnetic disk, or an optical disc.
  • in some jurisdictions, according to legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
  • An embodiment of the present application further provides a chip system, the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the steps of any method embodiment of the present application.
  • the chip system may be a single chip, or a chip module composed of multiple chips.


Abstract

An image processing method, an electronic device, an image processing system and a chip system, relating to the technical field of image processing, capable of solving the problem of poor image processing performance of the feature analysis network models in a multi-task deep neural network model. A first device extracts feature information of an image to be processed through a feature extraction network model; the first device identifies the extracted feature information to obtain identification information of the feature information; the first device sends the feature information of the image to be processed and the identification information of the feature information to a second device; after receiving the feature information and the corresponding identification information sent by the first device, the second device selects the feature analysis network model corresponding to the identification information to process the received feature information and obtain an image processing result.

Description

一种图像处理方法、电子设备、图像处理系统及芯片系统
本申请要求于2020年07月28日提交国家知识产权局、申请号为202010742689.0、申请名称为“一种图像处理方法、电子设备、图像处理系统及芯片系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及图像处理领域,尤其涉及一种图像处理方法、电子设备、图像处理系统及芯片系统。
背景技术
随着深度学习理论的发展,基于深度神经网络模型的图像处理也得到快速发展。例如,可以通过深度神经网络模型提取图像的特征,然后对图像的特征进行分析以完成对图像的处理,图像的处理可以包括:目标检测、语义分割、全景分割和图像分类等。
基于深度学习的深度神经网络模型从功能的角度可以分为两部分:特征提取网络模型和特征分析网络模型。特征提取网络模型用于提取图像的特征;特征分析网络模型用于对图像的特征进行分析处理,以完成相应的图像处理任务。在多任务的图像处理过程中,为了降低深度神经网络模型的参数,降低深度神经网络模型训练的任务量,多个不同的特征分析网络模型可以共用同一个特征提取网络模型。然而,多个不同的特征分析网络模型采用的图像特征可能会存在差别,共用同一个特征提取网络模型提取的图像特征无法兼顾各个特征分析网络模型,导致特征分析网络模型的图像处理效果较差。
发明内容
本申请实施例提供一种图像处理方法、电子设备、图像处理系统及芯片系统,解决多任务的深度神经网络模型中特征分析网络模型的图像处理效果较差的问题。
为达到上述目的,本申请采用如下技术方案:
第一方面提供一种图像处理方法,包括:第一设备通过预先存储的至少一个特征提取网络模型提取待处理图像的特征信息;第一设备对提取的特征信息进行标识,获得特征信息的标识信息;第一设备向第二设备发送待处理图像的特征信息和特征信息的标识信息,以指示第二设备选择与标识信息对应的特征分析网络模型处理特征信息。
本申请实施例中,第一设备对提取的特征信息进行标识获得标识信息之后,可以指示存在多个特征分析网络模型的第二设备根据标识信息选择与该标识信息对应的特征分析网络模型处理接收到的特征信息。该方法中,第二设备中的特征分析网络模型可以对应多个第一设备中的特征提取网络模型,或者对应第一设备中的多个特征特征提取网络模型。当存在多个第一设备或者第一设备内存在多个特征提取网络模型时,第二设备中的多个特征分析网络模型可以根据标识信息确定将特征信息输入哪个特征 分析网络模型中以完成相应的图像处理任务,避免了同一特征提取网络模型的特征信息无法同时满足多个特征分析网络模型的需求导致的图像处理效果较差的问题。
在第一方面的一种可能的实现方式中,第一设备对提取的特征信息进行标识,获得特征信息的标识信息包括:第一设备获得提取特征信息的特征提取网络模型的标识;第一设备将提取特征信息的特征提取网络模型的标识作为特征信息的标识信息。
在一些示例中,可以将提取特征信息的特征提取网络模型的标识作为特征信息的标识信息。第二设备可以根据特征信息的标识信息确定提取该特征信息的特征提取网络模型,从而选择合适的特征分析网络模型分析接收到的特征信息,以完成相应的图像处理任务。
在第一方面的一种可能的实现方式中,第一设备对提取的特征信息进行标识,获得特征信息的标识信息包括:第一设备获得特征信息的输出层级的标识,其中,特征信息的输出层级为提取特征信息的特征提取网络模型中输出特征信息的层级;第一设备将特征信息的输出层级的标识作为特征信息的标识信息。
在一些示例中,可以将特征信息的输出层级的标识作为特征信息的标识信息。第二设备可以根据特征信息的标识信息确定输出该特征信息的输出层级,从而选择合适的特征分析网络模型分析接收到的特征信息,以完成相应的图像处理任务。
在第一方面的一种可能的实现方式中,第一设备对提取的特征信息进行标识,获得特征信息的标识信息包括:第一设备获得提取特征信息的特征提取网络模型的标识;第一设备获得特征信息的输出层级的标识,其中,特征信息的输出层级为提取特征信息的特征提取网络模型中输出特征信息的层级;第一设备将提取特征信息的特征提取网络模型的标识和特征信息的输出层级的标识作为特征信息的标识信息。
在一些示例中,可以将提取特征信息的特征提取网络模型的标识和特征信息的输出层级的标识作为特征信息的标识信息。第二设备可以根据特征信息的标识信息确定该特征信息的特征提取网络模型和输出层级,从而选择合适的特征分析网络模型分析接收到的特征信息,以完成相应的图像处理任务。
实际应用中,可以根据实际需求选择上述列举的任一种标识信息的生成方法,从而提高了本申请实施例应用时的灵活性。
第二方面提供一种图像处理方法,包括:第二设备获取与所述第二设备连接的第一设备发送的待处理图像的特征信息和特征信息的标识信息;第二设备根据特征信息的标识信息确定处理所述特征信息的特征分析网络模型;第二设备将待处理图像的特征信息输入确定的特征分析网络模型,获得图像处理结果。
在第二方面的一种可能的实现方式中,第二设备根据特征信息的标识信息确定处理所述特征信息的特征分析网络模型包括:第二设备获取标识信息与特征分析网络模型的对应关系;第二设备根据对应关系,将与特征信息的标识信息对应的特征分析网络模型作为处理特征信息的特征分析网络模型。
在第二方面的一种可能的实现方式中,特征信息的标识信息包括:提取特征信息的特征提取网络模型的标识;和/或,特征信息的输出层级的标识,其中,特征信息的输出层级为提取特征信息的特征提取网络模型中输出特征信息的层级。
第三方面提供一种电子设备,包括:
特征信息提取单元,用于通过预先存储的至少一个特征提取网络模型提取待处理图像的特征信息;
标识信息生成单元,用于对提取的特征信息进行标识,获得特征信息的标识信息;
信息发送单元,用于向第二设备发送待处理图像的特征信息和特征信息的标识信息,以指示第二设备选择与标识信息对应的特征分析网络模型处理特征信息。
第四方面提供一种电子设备,包括:
信息获取单元,用于获取连接的第一设备发送的待处理图像的特征信息和特征信息的标识信息;
模型确定单元,用于根据特征信息的标识信息确定处理所述特征信息的特征分析网络模型;
图像处理单元,用于将待处理图像的特征信息输入确定的特征分析网络模型,获得图像处理结果。
第五方面提供一种电子设备,包括处理器,处理器用于运行存储器中存储的计算机程序,以实现本申请第一方面任一项的方法。
第六方面提供一种电子设备,包括处理器,处理器用于运行存储器中存储的计算机程序,实现本申请第二方面任一项的方法。
第七方面提供一种图像处理系统,包括至少一个第五方面提供的电子设备和至少一个第六方面提供的电子设备。
第八方面提供一种芯片系统,包括处理器,处理器与存储器耦合,处理器执行存储器中存储的计算机程序,以实现本申请第一方面任一项的方法和/或第二方面任一项的方法。
第九方面提供一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序被一个或多个处理器执行时实现本申请第一方面任一项的方法和/或第二方面任一项的方法。
第十方面提供了一种计算机程序产品,当计算机程序产品在设备上运行时,使得设备执行上述第一方面中任一项方法和/或第二方面任一项的方法。
可以理解的是,上述第二方面至第十方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
附图说明
图1为本申请实施例提供的图像处理方法的一种应用场景示意图;
图2为本申请实施例提供的图像处理方法的一种示例图;
图3为本申请实施例提供的图像处理方法的另一示例图;
图4为本申请实施例提供的执行图像处理方法的一种电子设备的硬件结构示意图;
图5为本申请实施例提供的第一设备和第二设备执行图像处理方法的处理过程示意图;
图6为本申请实施例提供的一种第一设备执行图像处理方法的流程示意图;
图7为本申请实施例中提供的图像处理方法中一种标识信息与特征分析网络模型的对应关系示意图;
图8为本申请实施例中提供的图像处理方法中另一种标识信息与特征分析网络模 型的对应关系示意图;
图9为本申请实施例中提供的图像处理方法中另一种标识信息与特征分析网络模型的对应关系示意图;
图10为本申请实施例提供的一种第二设备执行图像处理方法的流程示意图;
图11为本申请实施例提供的第一设备和第二设备的功能架构模块的示意框图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请实施例中,“一个或多个”是指一个、两个或两个以上;“和/或”,描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请实施例可以应用于多个图像处理任务的应用场景中。图像处理过程包括图像特征提取过程和图像特征分析过程。参见图1,图1为本申请实施例提供的图像处理方法的应用场景。如图1所示,对应于一个执行图像特征分析过程的云平台,可能存在多个执行图像特征提取过程的摄像装置,图示中仅示出3个摄像装置,实际应用中可以设置更多或更少数量的摄像装置。该应用场景的一个示例中,这些摄像装置设置在不同的道路上。该应用场景的另一示例中,这些摄像装置设置在工厂中,例如设置在工厂的车间、办公室、进出大门和车库进出口等。对于这些摄像装置,可能部分或者全部摄像装置中存储有特征提取网络模型,摄像装置采集视频和/或图像后,摄像装置内的特征提取网络模型可以提取视频中的图像帧的特征信息和/或图像的特征信息,摄像装置将特征信息发送至云平台,云平台基于待执行的图像处理任务选择合适的特征分析网络模型处理接收到的特征信息。
上述应用场景可以按照图2所示的图像处理方法的一种示例进行图像处理,图2所示的示例中,多个特征分析网络模型(例如,图示中的图像分类网络模型、目标检测网络模型和语义分割网络模型)共用一个特征提取网络模型。特征提取网络模型可以加载在各个摄像装置中,多个特征分析网络模型可以加载在云平台中。云平台中的 多个特征分析网络模型共用摄像装置中的一个特征提取网络模型提取的特征信息。摄像装置和云平台可以通过无线的方式建立通信连接。
如果按照图2所示的图像处理方法进行图像处理,可能存在以下问题:
(1)为了适应云平台中特征分析网络模型的需求,可能需要对摄像装置中的特征提取网络模型进行更新。当需要对摄像装置中的特征提取网络模型更新时,由于摄像装置众多且分布范围较广(例如城市道路上的摄像装置),更新的代价非常高,导致按照图2所示的示例的图像处理方法在应用的过程中部署、更新时灵活性较差。
(2)为了适应云平台中新加入的特征分析网络模型的需求,在摄像装置中加入新的性能更好的特征提取网络模型时,很可能会出现原有的特征分析网络模型无法识别新加入的特征提取网络模型提取的特征信息,导致云平台中的一些特征分析网络模型无法完成图像处理任务或图像处理效果较差,即无法保证同一特征提取网络模型能够完全适于所有的特征分析网络模型。
(3)对于云平台中不同的图像处理任务,可能用于目标检测的特征分析网络模型需要高性能的特征提取网络模型;用于图像分类的特征分析网络模型则没有必要采用高性能的特征提取网络模型,为了兼顾高要求的特征分析网络模型均采用高性能的特征分析网络模型,导致摄像装置每次进行特征提取时计算量大,内存消耗严重等问题。
为解决图2所示的示例存在的上述问题,还可以采用图3所示的图像处理方法的示例。如图3所示,每个摄像装置可以根据应用场合设置不同的特征提取网络模型,例如,设置在工厂车库进出口处的摄像装置采集的图像主要用于车辆检测,因此,设置在车库进出口的摄像装置中可以加载适于车辆检测的特征提取网络模型。设置在工厂进出大门处的摄像装置采集的图像主要用于人脸识别,因此,设置在工厂进出大门处的摄像装置可以加载适于人脸识别的特征提取网络模型。云平台中可以加载适于车辆检测的特征分析网络模型和适于人脸识别的特征提取网络模型。对于云平台,可能接收到设置在工厂进出大门处的摄像装置发送的特征信息,也可能收到设置在工厂车库进出口处的摄像装置发送的特征信息,为了便于云平台将特征信息输入合适的特征分析网络模型以完成相应的图像处理任务,可以在摄像装置中提取到图像的特征信息后,为特征信息生成标识信息。例如,预先设置了规则:适于车辆检测的特征提取网络模型的标识为0,适于人脸识别的特征提取网络模型的标识为1。摄像装置获得特征信息后,还可以基于特征提取网络模型的标识为特征信息生成标识信息,摄像装置将特征信息以及特征信息的标识信息发送给云平台,云平台可以识别接收到的特征信息的标识信息,然后根据接收到的特征信息的标识信息选择合适的特征分析网络模型对接收到的特征信息进行处理,从而获得图像处理结果。
作为举例,对应于执行图像特征分析过程的云平台,存在三个摄像装置,第一个摄像装置中的特征提取网络模型为模型1,提取的特征信息为特征信息A,对应的标识信息为00,第二个摄像装置中的特征提取网络模型为模型2,提取的特征信息为特征信息B,对应的标识信息为01,第三个摄像装置中的特征提取网络模型为模型3,提取的特征信息为特征信息C,对应的标识信息为10。
云平台中可能存储三个特征分析网络模型:执行图像分类任务的模型α,执行目标检测任务的模型β,执行语义分割任务的模型γ。
其中,摄像装置、摄像装置中存储的特征提取网络模型、标识信息、图像处理任务和 云平台中存储的特征分析网络模型之间的对应关系为:
第一个摄像装置—模型1—标识信息00—图像分类任务—模型α;
第二个摄像装置—模型2—标识信息01—目标检测任务—模型β;
第三个摄像装置—模型3—标识信息10—语义分割任务—模型γ。
一个示例中,这些摄像装置自主将提取的特征信息以及标识信息发送给云平台。云平台可以根据上述对应关系,将标识信息为00的特征信息输入到标识信息00对应的模型α中以完成图像分类任务;云平台可以根据上述对应关系,将标识信息为01的特征信息输入到标识信息01对应的模型β中以完成目标检测任务;云平台可以根据上述对应关系,将标识信息为10的特征信息输入到标识信息10对应的模型γ中以完成语义分割任务。
另一个示例中,当用户需要进行图像分类任务时,向云平台发送图像分类任务的执行指令,云平台通过上述对应关系,向图像分类任务对应的第一个摄像装置发送特征提取指令,第一个摄像装置将采集的图像输入模型1,获得特征信息,并生成特征信息的标识信息00,第一个摄像装置将特征信息和标识信息00发送至云平台,云平台接收到特征信息和标识信息00后,将特征信息输入到标识信息00对应的模型α中以完成图像分类任务。
另一个示例中,当用户需要进行语义分割任务时,用户通过上述对应关系确定通过第三个摄像装置执行图像特征提取过程,用户向第三个摄像装置发送执行指令,第三摄像装置通过模型3获得待处理图像的特征信息,并生成特征信息的标识信息10,第三个摄像装置将特征信息和标识信息10发送至云平台,云平台接收到特征信息和标识信息10后,云平台将特征信息输入到标识信息10对应的模型γ中以完成语义分割任务。
上述三个示例中每个摄像装置内可以存储一个特征提取网络模型。实际应用中,每个摄像装置内也可以存储多个特征提取网络模型。
按照图3所示的图像处理方法,在网络模型需要更新时,可以在需要更新的一个或多个摄像装置中增加新的特征提取网络模型或将新的特征提取网络模型替换旧的特征提取网络模型,并对应设置该特征提取网络模型的唯一标识。没有必要将与云平台存在网络连接关系的每个摄像装置中均增加新的特征提取网络模型或将新的特征提取网络模型替换旧的特征提取网络模型。因此图3所示图像处理方法增加了网络更新或部署的灵活性和可扩展性。为了便于描述,将特征提取网络模型所在的电子设备记为第一设备,第一设备执行的步骤记为图像特征提取过程。将特征分析网络模型所在的电子设备记为第二设备,第二设备执行的步骤记为图像特征分析过程。第一设备和第二设备共同完成图像处理方法。
另外,图1所示应用场景中,图像处理系统包括多个第一设备和一个第二设备,实际应用中,图像处理系统也可能包括一个第一设备和一个第二设备;图像处理系统还可能包括一个第一设备和多个第二设备,图像处理系统还可能包括多个第一设备和多个第二设备。当存在多个第二设备时,第一设备可以根据特征信息的标识信息确定发送给相应的第二设备。
作为举例,当图像处理系统包括:多个第一设备和多个第二设备时,第一设备、第一设备中存储的特征提取网络模型、标识信息、图像处理任务、第二设备、第二设备中存储的特征分析网络模型之间的对应关系为:
第一个摄像装置—模型1—标识信息00—图像分类任务—云平台1—模型α;
第一个摄像装置—模型2—标识信息01—目标检测任务—服务器2—模型β;
第二个摄像装置—模型3—标识信息10—语义分割任务—云平台1—模型γ。
该示例中,第一个摄像装置通过模型1提取图像的特征信息后,生成标识信息00,并向云平台1发送特征信息和标识信息00,云平台1将标识信息00的特征信息输入模型α;第一个摄像装置通过模型2提取图像的特征信息后,生成标识信息01,并向服务器2发送特征信息和标识信息01,服务器2将标识信息01的特征信息输入模型β。
当然,该示例中,也可以按照如下方式进行图像处理任务:第一个摄像装置通过模型1提取图像的一组特征信息,并生成标识信息00;通过模型2提取图像的一组特征信息,并生成标识信息01,第一个摄像装置将两组特征信息和对应的标识信息均发送至云平台1和服务器2,云平台1从接收到的两组特征信息中选择标识信息00的特征信息输入模型α,服务器2从接收到的两组特征信息中选择标识信息01的特征信息输入模型β。
基于上述示例可以理解,图像处理系统中可以包括至少一个第二设备,其中,第二设备中可以存储多个特征分析网络模型,也可以存储一个特征分析网络模型。
需要说明,上述应用场景以及对应的示例中的摄像装置和云平台仅用于第一设备和第二设备的示例,实际应用中,第一设备可以是摄像装置以外的其他电子设备,第二设备也可以是云平台以外的其他电子设备。
作为举例,本申请实施例提供一种图像处理方法可以适用于第一设备中,该第一设备可以为摄像装置、手机、平板电脑等带有摄像头的电子设备。当然,第一设备也可以不具有摄像头,而是接受其他具有摄像头的电子设备发送的图像或者视频。本申请实施例提供的一种图像处理方法还可以适用于第二设备,第二设备可以为云平台、服务器、计算机、笔记本、手机等具有图像特征分析能力的电子设备。当然,实际应用中,第一设备和第二设备可以为同一电子设备,例如,第一设备和第二设备均可以为手机。图像特征提取过程和图像特征分析过程均在手机的处理器中执行,或者图像特征提取过程在手机的第一处理中执行,图像特征分析过程在手机的第二处理器中执行。至少一个第一设备和第二设备组成图像处理系统。
图4示出了一种电子设备的结构示意图。该电子设备可以作为第一设备执行图像处理方法中的图像特征提取过程,也可以作为第二设备执行图像处理方法中的图像特征分析过程,还可以作为一个电子设备执行图像处理方法中的图像特征提取过程和图像特征分析过程。电子设备400可以包括处理器410,外部存储器接口420,内部存储器421,通用串行总线(universal serial bus,USB)接口430,充电管理模块440,电源管理模块441,电池442,天线1,天线2,移动通信模块450,无线通信模块460,音频模块470,扬声器470A,受话器470B,麦克风470C,耳机接口470D,传感器模块480,按键490,马达491,指示器492,摄像头493,显示屏494,以及用户标识模块(subscriber identification module,SIM)卡接口495等。其中传感器模块480可以包括压力传感器480A,陀螺仪传感器480B,气压传感器480C,磁传感器480D,加速度传感器480E,距离传感器480F,接近光传感器480G,指纹传感器480H,温度传感器480J,触摸传感器480K,环境光传感器480L,骨传导传感器480M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备400的具体限定。在本申请另一些实施例中,电子设备400可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬 件的组合实现。
处理器410可以包括一个或多个处理单元,例如:处理器410可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。例如,处理器410用于执行本申请实施例中的图像处理方法中的图像特征提取过程,例如,下述步骤601~步骤603,和/或执行本申请实施例中的图像处理方法中的图像特征分析过程,例如,下述步骤1001~步骤1003。
其中,控制器可以是电子设备400的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器410中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器410中的存储器为高速缓冲存储器。该存储器可以保存处理器410刚用过或循环使用的指令或数据。如果处理器410需要再次使用该指令或数据,可从存储器中直接调用。避免了重复存取,减少了处理器410的等待时间,因而提高了系统的效率。
在一些实施例中,处理器410可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器410可以包含多组I2C总线。处理器410可以通过不同的I2C总线接口分别耦合触摸传感器480K,充电器,闪光灯,摄像头493等。例如:处理器410可以通过I2C接口耦合触摸传感器480K,使处理器410与触摸传感器480K通过I2C总线接口通信,实现电子设备400的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器410可以包含多组I2S总线。处理器410可以通过I2S总线与音频模块470耦合,实现处理器410与音频模块470之间的通信。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块470与无线通信模块460可以通过PCM总线接口耦合。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。
MIPI接口可以被用于连接处理器410与显示屏494,摄像头493等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器410和摄像头493通过CSI接口通信,实现电子设备400的拍摄功能。处理器410和显示屏494通过DSI接口通信,实现电子设备400的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器410与摄像头493,显示屏494,无线通信模块460,音频模块470,传感器模块480等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口430是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口430可以用于连接充电器为电子设备400充电,也可以用于电子设备400与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备400的结构限定。在本申请另一些实施例中,电子设备400也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块440用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块440可以通过USB接口430接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块440可以通过电子设备400的无线充电线圈接收无线充电输入。充电管理模块440为电池442充电的同时,还可以通过电源管理模块441为电子设备供电。
电源管理模块441用于连接电池442,充电管理模块440与处理器410。电源管理模块441接收电池442和/或充电管理模块440的输入,为处理器410,内部存储器421,外部存储器,显示屏494,摄像头493,和无线通信模块460等供电。电源管理模块441还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。
在其他一些实施例中,电源管理模块441也可以设置于处理器410中。在另一些实施例中,电源管理模块441和充电管理模块440也可以设置于同一个器件中。
电子设备400的无线通信功能可以通过天线1,天线2,移动通信模块450,无线通信模块460,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备400中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块450可以提供应用在电子设备400上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块450可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块450可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块450还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。
在一些实施例中,移动通信模块450的至少部分功能模块可以被设置于处理器410中。在一些实施例中,移动通信模块450的至少部分功能模块可以与处理器410的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器470A,受话器470B等)输 出声音信号,或通过显示屏494显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器410,与移动通信模块450或其他功能模块设置在同一个器件中。
无线通信模块460可以提供应用在电子设备400上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块460可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块460经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器410。无线通信模块460还可以从处理器410接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备400的天线1和移动通信模块450耦合,天线2和无线通信模块460耦合,使得电子设备400可以通过无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备400通过GPU,显示屏494,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏494和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器410可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏494用于显示图像,视频等。显示屏494包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备400可以包括1个或N个显示屏494,N为大于1的正整数。
电子设备400可以通过ISP,摄像头493,视频编解码器,GPU,显示屏494以及应用处理器等实现拍摄功能。
ISP用于处理摄像头493反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头493中。
摄像头493用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体 (complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备400可以包括1个或N个摄像头493,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备400在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备400可以支持一种或多种视频编解码器。这样,电子设备400可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备400的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
在本申请实施例中,NPU或其他处理器可以用于对电子设备400存储的视频中的人脸图像进行人脸检测、人脸跟踪、人脸特征提取和图像聚类等操作;对电子设备400存储的图片中的人脸图像进行人脸检测、人脸特征提取等操作,并根据图片的人脸特征以及视频中人脸图像的聚类结果,对电子设备400存储的图片进行聚类。
外部存储器接口420可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备400的存储能力。外部存储卡通过外部存储器接口420与处理器410通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器421可以用于存储计算机可执行程序代码,可执行程序代码包括指令。处理器410通过运行存储在内部存储器421的指令,从而执行电子设备400的各种功能应用以及数据处理。内部存储器421可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)。存储数据区可存储电子设备400使用过程中所创建的数据(比如音频数据,电话本等)。
此外,内部存储器421可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备400可以通过音频模块470,扬声器470A,受话器470B,麦克风470C,耳机接口470D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块470用于将数字音频信号转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块470还可以用于对音频信号编码和解码。在一些实施例中,音频模块470可以设置于处理器410中,或将音频模块470的部分功能模块设置于处理器410中。
扬声器470A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备400可以通过扬声器470A收听音乐,或收听免提通话。
受话器470B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备400接听电话或语音信息时,可以通过将受话器470B靠近人耳接听语音。
麦克风4270C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风470C发声,将声音信号输入到麦克风470C。电子设备400可以设置至少一个麦克风470C。在另一些实施例中,电子设备400可以设置两个麦克风470C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备400还可以设置三个,四个或更多麦克风470C。
耳机接口470D用于连接有线耳机。耳机接口470D可以是USB接口430,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器480A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器480A可以设置于显示屏494。压力传感器480A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器480A,电极之间的电容改变。电子设备400根据电容的变化确定压力的强度。当有触摸操作作用于显示屏494,电子设备400根据压力传感器480A检测触摸操作强度。电子设备400也可以根据压力传感器480A的检测信号计算触摸的位置。
陀螺仪传感器480B可以用于确定电子设备400的运动姿态。在一些实施例中,可以通过陀螺仪传感器480B确定电子设备400围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器480B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器480B检测电子设备400抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备400的抖动,实现防抖。陀螺仪传感器480B还可以用于导航,体感游戏场景。
气压传感器480C用于测量气压。在一些实施例中,电子设备400通过气压传感器480C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器480D包括霍尔传感器。电子设备400可以利用磁传感器480D检测翻盖皮套的开合。在一些实施例中,当电子设备400是翻盖机时,电子设备400可以根据磁传感器480D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器480E可检测电子设备400在各个方向上(一般为三轴)加速度的大小。当电子设备400静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器480F,用于测量距离。电子设备400可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备400可以利用距离传感器480F测距以实现快速对焦。
接近光传感器480G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备400通过发光二极管向外发射红外光。电子设备400使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备400附近有物体。当检测到不充分的反射光时,电子设备400可以确定电子设备400附近没有物体。电子设备400可以利用接近光传感器480G检测用户手持电子设备400贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器480G也可用 于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器480L用于感知环境光亮度。电子设备400可以根据感知的环境光亮度自适应调节显示屏494亮度。环境光传感器480L也可用于拍照时自动调节白平衡。环境光传感器480L还可以与接近光传感器480G配合,检测电子设备400是否在口袋里,以防误触。
指纹传感器480H用于采集指纹。电子设备400可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器480J用于检测温度。在一些实施例中,电子设备400利用温度传感器480J检测的温度,执行温度处理策略。例如,当温度传感器480J上报的温度超过阈值,电子设备400执行降低位于温度传感器480J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备400对电池442加热,以避免低温导致电子设备400异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备400对电池442的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器480K,也称“触控面板”。触摸传感器480K可以设置于显示屏494,由触摸传感器480K与显示屏494组成触摸屏,也称“触控屏”。触摸传感器480K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏494提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器480K也可以设置于电子设备400的表面,与显示屏494所处的位置不同。
骨传导传感器480M可以获取振动信号。在一些实施例中,骨传导传感器480M可以获取人体声部振动骨块的振动信号。骨传导传感器480M也可以接触人体脉搏,接收血压跳动信号。
在一些实施例中,骨传导传感器480M也可以设置于耳机中,结合成骨传导耳机。音频模块470可以基于骨传导传感器480M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于骨传导传感器480M获取的血压跳动信号解析心率信息,实现心率检测功能。
The keys 490 include a power key, volume keys, and the like. The keys 490 may be mechanical keys or touch keys. The electronic device 400 may receive key input and generate key signal input related to user settings and function control of the electronic device 400.
The motor 491 may generate a vibration prompt. The motor 491 may be used for incoming call vibration prompts and for touch vibration feedback. For example, touch operations acting on different applications (for example, photographing and audio playing) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display 494 may also correspond to different vibration feedback effects of the motor 491. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. Touch vibration feedback effects may also be customized.
The indicator 492 may be an indicator light, and may be used to indicate the charging status and battery level changes, and may also be used to indicate messages, missed calls, notifications, and the like.
The SIM card interface 495 is configured to connect a SIM card. The SIM card may be inserted into the SIM card interface 495 or removed from the SIM card interface 495 to implement contact with and separation from the electronic device 400. The electronic device 400 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 495 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 495 at the same time, and the multiple cards may be of the same type or different types. The SIM card interface 495 may also be compatible with different types of SIM cards, and may also be compatible with an external storage card. The electronic device 400 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 400 uses an eSIM, that is, an embedded SIM card. The eSIM card may be embedded in the electronic device 400 and cannot be separated from the electronic device 400.
It should be noted that, if the second device is a server, the server includes a processor and a communication interface.
In the embodiments of this application, the specific structure of the execution body that performs the image feature extraction process and the image feature analysis process is not particularly limited, provided that it can run a program recording the code of the image feature extraction process and/or the image feature analysis process of the image processing method in the embodiments of this application, and communicate according to the image feature extraction process and/or the image feature analysis process of the image processing method in the embodiments of this application. For example, the execution body of the image processing method provided in the embodiments of this application may be a functional module in the first device that can call and execute a program, or an apparatus applied to the first device, for example, a chip; and the execution body of the image processing method provided in the embodiments of this application may be a functional module in the second device that can call and execute a program, or an apparatus applied to the second device, for example, a chip.
In the foregoing application scenarios and corresponding examples, the description uses an example in which multiple camera apparatuses (each loaded with one feature extraction network model) correspond to one cloud platform to complete an image processing task. For a clearer understanding of this application, the subsequent embodiments are described by using an example in which one of the multiple camera apparatuses and one corresponding cloud platform complete an image processing task, and the camera apparatus may be loaded with one feature extraction network model or with multiple feature extraction network models.
Referring to FIG. 5, FIG. 5 is an example diagram in which the first device and the second device perform the image processing method according to an embodiment of this application. In this example, the first device is loaded with a feature extraction network model, extracts feature information of the image to be processed through the feature extraction network model, and generates identification information of the feature information; the first device then sends the feature information and the identification information to the second device. The second device is loaded with an image classification network model, a target detection network model, and a semantic segmentation network model; the second device recognizes the identification information of the received feature information, selects, based on the identification information, the feature analysis network model corresponding to the identification information, and inputs the feature information into the selected feature analysis network model to complete the corresponding image processing task.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of an image processing method according to an embodiment of this application. As shown in the figure, the method is applied to a first device and includes the following steps.
Step 601: The first device extracts feature information of an image to be processed through at least one pre-stored feature extraction network model.
In the embodiments of this application, the feature extraction network model may include a VGG model, a ResNet model, an Inception model, and the like, or may be a feature extraction network model other than the models listed above; this is not limited in the embodiments of this application.
The feature information of the image includes a feature map obtained after the feature extraction network model in the first device processes the image to be processed.
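As a minimal, non-limiting sketch of step 601, the following Python snippet shows how a pre-stored backbone could produce such a feature map. The use of torchvision's ResNet-50, the truncation point, and the ImageNet-style preprocessing values are assumptions made only for illustration and are not required by this application.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Pre-stored feature extraction network model (assumption: a ResNet-50
    # backbone with its classification head removed, so it outputs a feature map).
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])
    feature_extractor.eval()

    # Standard ImageNet-style preprocessing (an assumption for this sketch).
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def extract_feature_info(image_path: str) -> torch.Tensor:
        """Return the feature map of the image to be processed."""
        image = Image.open(image_path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)      # shape: (1, 3, 224, 224)
        with torch.no_grad():
            feature_map = feature_extractor(batch)  # shape: (1, 2048, 7, 7)
        return feature_map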
Step 602: The first device identifies the extracted feature information to obtain identification information of the feature information.
In the embodiments of this application, the feature extraction network model in the first device is configured to extract the feature information of the image, and the feature analysis network model in the second device is configured to perform the corresponding image processing task on the image based on the feature information of the image. However, multiple feature analysis network models may exist in the second device to complete different image processing tasks. As an example, a model A, a model B, and a model C exist in the second device: model A is configured to obtain an image classification result based on the feature information of the image, model B is configured to obtain a target detection result based on the feature information of the image, and model C is configured to obtain a semantic segmentation result based on the feature information of the image. After the image feature extraction network model in the first device extracts the feature information of the image, the feature information of the image may be identified based on the image processing task to be performed, so that after receiving the feature information of the image, the second device can determine, based on the identification information, whether to select model A, model B, or model C for the subsequent image processing task.
In actual application, the identification information of the feature information may be determined based on the image processing task to be performed. As an example, the identification information of model A in the second device, which obtains an image classification result based on the feature information of the image, is 00; the identification information of model B in the second device, which obtains a target detection result based on the feature information of the image, is 01; and the identification information of model C in the second device, which obtains a semantic segmentation result based on the feature information of the image, is 11. In the first device, an appropriate feature extraction network model may be selected based on the image processing task to be performed on the image to be processed, and the extracted feature information is then identified based on the image processing task to be performed. Certainly, in actual application, the feature information may also be extracted and identified in other manners; for details, refer to the related descriptions of FIG. 7 to FIG. 9 below.
When the image processing task to be performed on the image to be processed is image classification, the identification information of the feature information may be set to 00; when the image processing task to be performed on the image to be processed is target detection, the identification information of the feature information may be set to 01; and when the image processing task to be performed on the image to be processed is semantic segmentation, the identification information of the feature information may be set to 11.
In actual application, the identification information of the feature information may also be generated based on other information; for details, refer to the subsequent descriptions.
It should be noted that the foregoing examples use "0" and "1" as identification characters to form the identification information; in actual application, identification characters in other forms may also be used to form the identification information.
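A brief sketch of step 602, assuming the task-to-identifier mapping (00/01/11) described above; the message structure and field names are illustrative only.

    from dataclasses import dataclass
    import torch

    # Assumed mapping from image processing task to identification information.
    TASK_TO_ID = {
        "image_classification": "00",
        "target_detection": "01",
        "semantic_segmentation": "11",
    }

    @dataclass
    class TaggedFeature:
        """Feature information together with its identification information."""
        identification: str
        feature_map: torch.Tensor

    def identify_feature(feature_map: torch.Tensor, task: str) -> TaggedFeature:
        return TaggedFeature(identification=TASK_TO_ID[task],
                             feature_map=feature_map)

    # Example: tag a feature map extracted for a target detection task.
    # tagged = identify_feature(feature_map, "target_detection")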
Step 603: The first device sends the feature information of the image to be processed and the identification information of the feature information to the second device, to instruct the second device to select the feature analysis network model corresponding to the identification information to process the feature information.
In the embodiments of this application, when the first device and the second device are not the same device, the first device sends the feature information of the image to be processed and the identification information of the feature information to the second device. When the first device and the second device are the same device, the following cases may exist:
Case 1: The feature extraction network model is located in a first processor, and the feature analysis network model is located in a second processor.
In this case, that the first device sends the feature information of the image to be processed and the identification information of the feature information to the second device includes: the first processor of the first device sends the feature information of the image to be processed and the identification information of the feature information to the second processor of the first device.
Case 2: The feature extraction network model and the feature analysis network model are located in the same processor.
In this case, that the first device sends the feature information of the image to be processed and the identification information of the feature information to the second device includes: the feature extraction functional module of the first device sends the feature information of the image to be processed and the identification information of the feature information to the feature analysis functional module of the first device, where the feature extraction network model is stored in the feature extraction functional module, and the feature analysis network model is stored in the feature analysis functional module.
In the embodiments of this application, the first device generates identification information for the feature information of the image to be processed, to instruct the second device, after receiving the identification information of the feature information, to select the corresponding feature analysis network model based on the identification information to complete the corresponding image processing task. In this way, the feature information obtained by the feature analysis network model in the second device is matched feature information, which alleviates the problem of poor multi-task image processing performance on the second device.
As another embodiment of this application, the first device may obtain the identification information of the feature information in the following manner:
The first device obtains the identifier of the feature extraction network model that extracts the feature information; and
the first device uses the identifier of the feature extraction network model that extracts the feature information as the identification information of the feature information.
In this embodiment of this application, at least one feature extraction network model is stored in the first device, an identifier is preset for each feature extraction network model, and the identification information of the feature information may be generated based on the identifier of the feature extraction network model that obtains the feature information. Because one second device may have network connections with multiple first devices, that is, the second device may receive feature information sent by multiple first devices, a unique identifier needs to be set for the feature extraction network model even if only one feature extraction network model exists in the first device.
Referring to FIG. 7, as an example of this embodiment, if three models, model A, model B, and model C, are provided for the image processing tasks in the second device, and the image processing tasks corresponding to model A, model B, and model C have different requirements on the feature information, then, to meet the requirements of model A, model B, and model C on the feature information, three feature extraction network models may be provided in the first device: model 1, model 2, and model 3. Model 1 can satisfy the image processing task of model B, model 2 can satisfy the image processing task of model C, and model 3 can satisfy the image processing task of model A. Certainly, other cases may exist in addition to the foregoing satisfaction relationships. For example, the feature information obtained by model 3 may satisfy not only the requirement of model A described above but also the requirement of model B; however, the process of extracting feature information by model 3 occupies more memory than the process of extracting feature information by model 1, causing a waste of resources. In this case, to avoid large memory occupation and resource waste, the correspondence between the identification information and the feature analysis network models is set to the correspondence shown in FIG. 7, that is, the feature information extracted by model 3 is input to model A, and the feature information extracted by model 1 is input to model B.
When the user wants to perform the image processing task corresponding to model A, the user may, according to the correspondence shown in FIG. 7, input the image to be processed into model 3 in the first device. Model 3 in the first device outputs the feature information of the image to be processed and the identification information 10 of the feature information. The first device sends the feature information of the image to be processed and the identification information 10 to the second device. After receiving the feature information and the corresponding identification information 10, the second device determines, based on the correspondence, that model A is the target model, and inputs the feature information into model A, thereby obtaining the image processing result desired by the user.
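A sketch of the FIG. 7 scenario on the first device side is given below. Only the identifier "10" for model 3 comes from the example above; the identifiers of model 1 and model 2, the placeholder backbones, and the task names are hypothetical and used solely for illustration.

    import torch
    from torch import nn

    def _placeholder_backbone(out_channels: int) -> nn.Module:
        # Stand-in backbones; a real deployment would load VGG/ResNet/Inception etc.
        return nn.Sequential(nn.Conv2d(3, out_channels, 3, padding=1), nn.ReLU())

    # Assumed registry of pre-stored feature extraction network models.
    EXTRACTION_MODELS = {
        "model_1": ("00", _placeholder_backbone(64)),   # identifier assumed
        "model_2": ("01", _placeholder_backbone(128)),  # identifier assumed
        "model_3": ("10", _placeholder_backbone(256)),  # identifier from FIG. 7 example
    }

    # Image processing task -> extraction model, per the FIG. 7 correspondence.
    TASK_TO_EXTRACTION_MODEL = {
        "task_of_model_A": "model_3",
        "task_of_model_B": "model_1",
        "task_of_model_C": "model_2",
    }

    def extract_and_identify(image: torch.Tensor, task: str):
        """Extract feature information and use the identifier of the extraction
        model as the identification information of the feature information."""
        model_name = TASK_TO_EXTRACTION_MODEL[task]
        identifier, extractor = EXTRACTION_MODELS[model_name]
        with torch.no_grad():
            feature_map = extractor(image)
        return identifier, feature_map

    # Example: a user who wants the task of model A obtains identifier "10".
    # ident, feats = extract_and_identify(torch.rand(1, 3, 224, 224), "task_of_model_A")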
As another embodiment of this application, the first device may obtain the identification information of the feature information in the following manner:
The first device obtains the identifier of the output level of the feature information, where the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, that outputs the feature information; and
the first device uses the identifier of the output level of the feature information as the identification information of the feature information.
In this embodiment of this application, the feature extraction network model may have multiple levels. As an example, the structure of the feature extraction network model may include multiple convolutional layers, multiple pooling layers, a fully connected layer, and the like, arranged in the form of convolutional layer, pooling layer, convolutional layer, pooling layer, ..., fully connected layer. The following relationship may exist between these levels: the output of the previous level is the input of the next level, and the feature information of the image is finally output by one level. However, in actual application, a feature analysis network model may need not only the output of the last level but also the output of one or more intermediate levels; or a feature analysis network model may not need the output of the last level at all, but only the output of one or more intermediate levels. Therefore, to meet the requirements of the feature analysis network models in the second device, a specific level of the feature extraction network model in the first device may be set as an output level that outputs the feature information of the image, and corresponding identification information is generated based on the output level that outputs the feature information, so that the second device can select the appropriate feature analysis network model.
For the feature extraction network model, any level capable of outputting feature information of the image (for example, the convolutional layers, pooling layers, and fully connected layer in the foregoing example) can serve as an output level of the feature information.
The foregoing example of the feature extraction network model is only used to describe the output levels in the feature extraction network model and does not constitute a limitation on the structure of the feature extraction network model. For example, the feature extraction network model may also be VGG or DenseNet, or a feature extraction network model with a feature pyramid structure.
Referring to FIG. 8, as an example of this embodiment, if two feature analysis network models, model A and model B, are provided for the image processing tasks in the second device, where the image processing task corresponding to model A requires the feature information output by output level 2 to output level 5 of model 1, and the image processing task corresponding to model B requires the feature information output by output level 3 to output level 5 of model 1, then the feature extraction network model in the first device has four output levels: output level 2, output level 3, output level 4, and output level 5. The identification information corresponding to the feature information of output level 2 to output level 5 of model 1 may be set to 0, and the identification information corresponding to the feature information of output level 3 to output level 5 of model 1 may be set to 1.
Certainly, in actual application, the first device may further include other feature extraction network models. This embodiment of this application only uses model 1 to describe how the identifier of an output level is used as the identification information of the feature information.
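A non-limiting sketch of extracting such intermediate output levels is shown below. It assumes torchvision's feature-extraction utility and lets the four stages of a ResNet-50 stand in for output level 2 to output level 5; the layer names and that correspondence are assumptions, while the 0/1 identifiers follow the FIG. 8 example.

    import torch
    from torchvision.models import resnet50, ResNet50_Weights
    from torchvision.models.feature_extraction import create_feature_extractor

    # Identification information -> output levels it denotes (FIG. 8 example).
    # ResNet-50 stages layer1..layer4 stand in for output levels 2..5 (assumption).
    OUTPUT_FORMS = {
        "0": ["layer1", "layer2", "layer3", "layer4"],  # output levels 2-5
        "1": ["layer2", "layer3", "layer4"],            # output levels 3-5
    }

    backbone = resnet50(weights=ResNet50_Weights.DEFAULT)

    def extract_levels(image: torch.Tensor, identification: str):
        """Return the feature maps of the output levels denoted by the identifier."""
        nodes = {name: name for name in OUTPUT_FORMS[identification]}
        extractor = create_feature_extractor(backbone, return_nodes=nodes)
        with torch.no_grad():
            return extractor(image)  # dict: level name -> feature map

    # Example: feature information required by model A (identifier "0") in FIG. 8.
    # feats = extract_levels(torch.rand(1, 3, 224, 224), "0")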
As another embodiment of this application, when at least two feature extraction network models exist in the first device and at least one of them includes multiple output levels, the first device may obtain the identification information of the feature information in the following manner:
The first device obtains the identifier of the feature extraction network model that extracts the feature information;
the first device obtains the identifier of the output level of the feature information, where the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, that outputs the feature information; and
the first device uses the identifier of the feature extraction network model that extracts the feature information and the identifier of the output level of the feature information as the identification information of the feature information.
Referring to FIG. 9, as an example of this embodiment, two feature extraction network models exist in the first device: model 1 and model 2. The identifier of model 1 may be 0, and the identifier of model 2 may be 1. Model 1 has one output level, and model 2 has four output levels: output level 2, output level 3, output level 4, and output level 5. The identifier of output level 2 to output level 4 of model 2 is 0, and the identifier of output level 3 to output level 5 of model 2 is 1. Correspondingly, the identification information of the feature information obtained by model 1 is 0X (which may be 00 or 01), the identification information of the feature information obtained by output level 2 to output level 4 of model 2 is 10, and the identification information of the feature information obtained by output level 3 to output level 5 of model 2 is 11. Based on the requirement of each feature analysis network model in the second device on the feature information, the following correspondence is established in advance: 00—model A, 10—model B, 11—model C, or 01—model A, 10—model B, 11—model C.
When the user needs to perform the image processing task corresponding to model B on the image to be processed, the user looks up the correspondence and determines that the feature information obtained by output level 2 to output level 4 of model 2 in the first device is needed. The user may control, on the first device side, output level 2 to output level 4 of model 2 to output the feature information, or the user may, on the second device side, send an instruction to the first device through the second device to control output level 2 to output level 4 of model 2 in the first device to output the feature information. After output level 2 to output level 4 of model 2 in the first device output the feature information, the identifier 1 of the extraction model of the feature information and the identifier 0 of the output level of the feature information may be generated, and 10 is used as the identification information of the feature information. The first device sends the feature information and the identification information 10 to the second device, and the second device inputs the feature information into model B based on the identification information 10, to obtain the image processing result.
In actual application, whether the identifier of the feature extraction network model that extracts the feature information, the identifier of the output level of the feature information, or both are used as the identification information of the feature information may be set according to the actual situation.
As another embodiment of this application, to make the identification information of the feature information clearer, the following rule may be set:
The field corresponding to the identification information is divided into a first field used to indicate the feature extraction network model that extracts the feature information and a second field used to indicate the output level of the feature information.
The first field occupies m bytes, and the second field occupies n bytes. In actual application, the value of m may be determined based on the number of feature extraction network models in the first device, and the value of n may be determined based on the number of output forms of the feature extraction network models.
To make the embodiments of this application extensible during implementation, m and n may be set relatively large. For example, if m is set to 4 bytes, a maximum of 2⁴ feature extraction network models can be covered; if n is set to 4 bytes, feature extraction network models with a maximum of 2⁴ output forms can be covered. An output form represents a set of output levels that output feature information; for example, the sets of output levels corresponding to one feature extraction network model, {output level 1}, {output level 3}, {output level 2 to output level 4}, and {output level 3 to output level 5}, are four different output forms. In actual application, m and n may also take other values, which is not limited in this application. When a relatively large number of feature analysis network models exist on the second device side, there is enough identification information to form a one-to-one correspondence with the feature analysis network models.
When the first device sends the identification information of the feature information to the second device, the first field and the second field may be used as one overall field, in which m consecutive bytes indicate the feature extraction network model that extracts the feature information and n consecutive bytes indicate the output level of the feature information. Certainly, the overall field may further include bytes representing other meanings; the total number of bytes of the overall field is not limited in this embodiment of this application.
The foregoing example is only illustrative. In actual application, the first field and the second field may be two completely independent fields. For example, the first independent field includes the first field and may further include at least one byte used to distinguish whether the current independent field is the identifier of the extraction model or the identifier of the output level; the second independent field includes the second field and may further include at least one byte used to distinguish whether the current independent field is the identifier of the extraction model or the identifier of the output level. The first device sends the first independent field and the second independent field to the second device as the identification information of the feature information.
As an example, the first byte of the first independent field is 0, indicating that the m consecutive bytes stored in this independent field are the identifier of the extraction model, and this independent field further includes m consecutive bytes used to indicate the feature extraction network model that extracts the feature information. The first byte of the second independent field is 1, indicating that the n consecutive bytes stored in this independent field are the identifier of the output level, and this independent field further includes n consecutive bytes used to indicate the output level of the feature information.
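A minimal sketch of one possible byte layout for the overall field, assuming m = n = 4 bytes and big-endian unsigned integers; this layout is an illustrative assumption, not a required wire format.

    import struct
    from typing import Tuple

    M_BYTES = 4  # first field: identifier of the feature extraction network model
    N_BYTES = 4  # second field: identifier of the output level (output form)

    def pack_identification(model_id: int, output_level_id: int) -> bytes:
        """Pack the first field (m bytes) and second field (n bytes) into one
        overall field, as big-endian unsigned 32-bit integers (assumption)."""
        return struct.pack(">II", model_id, output_level_id)

    def unpack_identification(payload: bytes) -> Tuple[int, int]:
        """Recover (model identifier, output level identifier) on the second device."""
        return struct.unpack(">II", payload[:M_BYTES + N_BYTES])

    # Example: model identifier 1 and output level identifier 0 -> 8-byte field.
    field = pack_identification(1, 0)
    assert unpack_identification(field) == (1, 0)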
The foregoing method for generating the identification information is only an example. In actual application, other generation manners may also be used, which is not limited in the embodiments of this application.
Referring to FIG. 10, FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of this application. As shown in the figure, the method is applied to a second device and includes the following steps.
Step 1001: The second device obtains feature information of an image to be processed and identification information of the feature information sent by a first device connected to the second device.
Step 1002: The second device determines, based on the identification information of the feature information, a feature analysis network model for processing the feature information.
Step 1003: The second device inputs the feature information of the image to be processed into the determined feature analysis network model to obtain an image processing result.
In the embodiments of this application, the second device needs to cooperate with the first device to complete the image processing task: the first device completes the image feature extraction task, and the second device analyzes the feature information extracted by the first device to obtain the image processing result. Therefore, a connection exists between the first device and the second device, which may be a wired connection or a wireless connection. As described above, the feature information extracted by the first device has identification information, and the identification information is used to instruct the second device to select the appropriate feature analysis network model to complete the corresponding image processing task. After obtaining the identification information of the feature information of the image to be processed, the second device needs to determine, based on the identification information, the feature analysis network model for processing the feature information.
Corresponding to the methods for obtaining the identification information of the feature information in the embodiments shown in FIG. 7, FIG. 8, and FIG. 9, the identification information of the feature information obtained by the second device includes:
the identifier of the feature extraction network model that extracts the feature information;
and/or
the identifier of the output level of the feature information, where the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, that outputs the feature information.
As another embodiment of this application, that the second device determines, based on the identification information of the feature information, the feature analysis network model for processing the feature information includes:
The second device obtains the correspondence between the identification information and the feature analysis network models; and
the second device uses, based on the correspondence, the feature analysis network model corresponding to the identification information of the feature information as the feature analysis network model for processing the feature information.
It should be noted that the correspondence may include not only the correspondence between the identification information and the feature analysis network models, but also correspondences of other information. For example, in the example shown in FIG. 3, the described correspondence is the correspondence among the first devices, the feature extraction network models stored in the first devices, the identification information, the image processing tasks, and the feature analysis network models stored in the second device.
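A brief sketch of steps 1001 to 1003 on the second device side, assuming the identification information is carried as a short string; the correspondence table (in the spirit of the FIG. 9 example), the 256-channel feature maps, and the three placeholder analysis heads are illustrative assumptions.

    import torch
    from torch import nn

    # Placeholder feature analysis network models (illustrative heads only).
    model_a = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.LazyLinear(10))
    model_b = nn.Sequential(nn.Conv2d(256, 36, 1))   # e.g. a detection head
    model_c = nn.Sequential(nn.Conv2d(256, 21, 1))   # e.g. a segmentation head

    # Assumed correspondence between identification information and models.
    CORRESPONDENCE = {
        "00": model_a,
        "10": model_b,
        "11": model_c,
    }

    def analyze(identification: str, feature_map: torch.Tensor) -> torch.Tensor:
        """Select the feature analysis network model by identification information
        and obtain the image processing result (steps 1002 and 1003)."""
        analysis_model = CORRESPONDENCE[identification]
        with torch.no_grad():
            return analysis_model(feature_map)

    # Example: a feature map tagged "10" is routed to model B.
    # result = analyze("10", torch.rand(1, 256, 14, 14))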
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
In the embodiments of this application, the first device and the second device may be divided into functional units based on the foregoing method examples. For example, each functional unit may be obtained through division corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of this application is schematic and is only a logical function division; other division manners may exist in actual implementation. The following description uses division of each functional unit corresponding to each function as an example.
Referring to FIG. 11, the first device 1110 includes:
a feature information extraction unit 1111, configured to extract feature information of an image to be processed through at least one pre-stored feature extraction network model;
an identification information generation unit 1112, configured to identify the extracted feature information to obtain identification information of the feature information; and
an information sending unit 1113, configured to send the feature information of the image to be processed and the identification information of the feature information to a second device, to instruct the second device to select the feature analysis network model corresponding to the identification information to process the feature information.
As another embodiment of this application, the identification information generation unit 1112 is further configured to:
obtain the identifier of the feature extraction network model that extracts the feature information, and use the identifier of the feature extraction network model that extracts the feature information as the identification information of the feature information.
As another embodiment of this application, the identification information generation unit 1112 is further configured to:
obtain the identifier of the output level of the feature information, where the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, that outputs the feature information, and use the identifier of the output level of the feature information as the identification information of the feature information.
As another embodiment of this application, the identification information generation unit 1112 is further configured to:
obtain the identifier of the feature extraction network model that extracts the feature information;
obtain the identifier of the output level of the feature information, where the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, that outputs the feature information; and
use the identifier of the feature extraction network model that extracts the feature information and the identifier of the output level of the feature information as the identification information of the feature information.
It should be noted that, for the information exchange and execution processes between the units in the foregoing first device, because they are based on the same concept as the method embodiments of this application, for their specific functions and technical effects, refer to the method embodiments; details are not described herein again.
Referring to FIG. 11, the second device 1120 includes:
an information obtaining unit 1121, configured to obtain feature information of an image to be processed and identification information of the feature information sent by a connected first device;
a model determining unit 1122, configured to determine, based on the identification information of the feature information, a feature analysis network model for processing the feature information; and
an image processing unit 1123, configured to input the feature information of the image to be processed into the determined feature analysis network model to obtain an image processing result.
As another embodiment of this application, the model determining unit 1122 is further configured to:
obtain the correspondence between the identification information and the feature analysis network models, and use, based on the correspondence, the feature analysis network model corresponding to the identification information of the feature information as the feature analysis network model for processing the feature information.
As another embodiment of this application, the identification information of the feature information includes:
the identifier of the feature extraction network model that extracts the feature information;
and/or
the identifier of the output level of the feature information, where the output level of the feature information is the level, in the feature extraction network model that extracts the feature information, that outputs the feature information.
It should be noted that, for the information exchange and execution processes between the units in the foregoing second device, because they are based on the same concept as the method embodiments of this application, for their specific functions and technical effects, refer to the method embodiments; details are not described herein again.
A person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional units is used as an example. In actual application, the foregoing functions may be allocated to different functional units as required, that is, the internal structure of the first device is divided into different functional units to complete all or some of the functions described above. The functional units in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units are only intended to distinguish them from one another and are not used to limit the protection scope of this application. For the specific working process of the units in the foregoing system, refer to the corresponding process in the foregoing method embodiments; details are not described herein again.
An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
An embodiment of this application further provides a computer program product. When the computer program product runs on a first device, the first device can implement the steps in the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, all or some of the processes in the methods of the foregoing embodiments of this application may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented. The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, some intermediate forms, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the first device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunication signal.
An embodiment of this application further provides a chip system. The chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the steps of any method embodiment of this application. The chip system may be a single chip or a chip module composed of multiple chips.
In the foregoing embodiments, the descriptions of the embodiments have respective focuses. For a part that is not described or recorded in detail in one embodiment, refer to the related descriptions of other embodiments.
A person of ordinary skill in the art may be aware that the units and method steps of the examples described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
Finally, it should be noted that the foregoing descriptions are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (11)

  1. An image processing method, characterized by comprising:
    extracting, by a first device, feature information of an image to be processed through at least one pre-stored feature extraction network model;
    identifying, by the first device, the extracted feature information to obtain identification information of the feature information; and
    sending, by the first device, the feature information of the image to be processed and the identification information of the feature information to a second device, to instruct the second device to select a feature analysis network model corresponding to the identification information to process the feature information.
  2. The method according to claim 1, characterized in that the identifying, by the first device, the extracted feature information to obtain the identification information of the feature information comprises:
    obtaining, by the first device, an identifier of the feature extraction network model that extracts the feature information; and
    using, by the first device, the identifier of the feature extraction network model that extracts the feature information as the identification information of the feature information.
  3. The method according to claim 1, characterized in that the identifying, by the first device, the extracted feature information to obtain the identification information of the feature information comprises:
    obtaining, by the first device, an identifier of an output level of the feature information, wherein the output level of the feature information is a level, in the feature extraction network model that extracts the feature information, that outputs the feature information; and
    using, by the first device, the identifier of the output level of the feature information as the identification information of the feature information.
  4. The method according to claim 1, characterized in that the identifying, by the first device, the extracted feature information to obtain the identification information of the feature information comprises:
    obtaining, by the first device, an identifier of the feature extraction network model that extracts the feature information;
    obtaining, by the first device, an identifier of an output level of the feature information, wherein the output level of the feature information is a level, in the feature extraction network model that extracts the feature information, that outputs the feature information; and
    using, by the first device, the identifier of the feature extraction network model that extracts the feature information and the identifier of the output level of the feature information as the identification information of the feature information.
  5. An image processing method, characterized by comprising:
    obtaining, by a second device, feature information of an image to be processed and identification information of the feature information sent by a first device connected to the second device;
    determining, by the second device based on the identification information of the feature information, a feature analysis network model for processing the feature information; and
    inputting, by the second device, the feature information of the image to be processed into the determined feature analysis network model to obtain an image processing result.
  6. The method according to claim 5, characterized in that the determining, by the second device based on the identification information of the feature information, a feature analysis network model for processing the feature information comprises:
    obtaining, by the second device, a correspondence between the identification information and feature analysis network models; and
    using, by the second device based on the correspondence, the feature analysis network model corresponding to the identification information of the feature information as the feature analysis network model for processing the feature information.
  7. The method according to claim 5, characterized in that the identification information of the feature information comprises:
    an identifier of the feature extraction network model that extracts the feature information;
    and/or
    an identifier of an output level of the feature information, wherein the output level of the feature information is a level, in the feature extraction network model that extracts the feature information, that outputs the feature information.
  8. An electronic device, characterized in that the electronic device comprises a processor, wherein the processor is configured to run a computer program stored in a memory to implement the method according to any one of claims 1 to 4.
  9. An electronic device, characterized in that the electronic device comprises a processor, wherein the processor is configured to run a computer program stored in a memory to implement the method according to any one of claims 5 to 7.
  10. An image processing system, characterized by comprising: at least one electronic device according to claim 8 and at least one electronic device according to claim 9.
  11. A chip system, characterized in that the chip system comprises a processor, wherein the processor is coupled to a memory, and the processor is configured to run a computer program stored in the memory to implement the method according to any one of claims 1 to 4 and/or the method according to any one of claims 5 to 7.