
CN115840654A - Message processing method, system, computing device and readable storage medium - Google Patents

Message processing method, system, computing device and readable storage medium

Info

Publication number
CN115840654A
CN115840654A
Authority
CN
China
Prior art keywords
message
thread
semaphore
memory area
communication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310103617.5A
Other languages
Chinese (zh)
Other versions
CN115840654B (en)
Inventor
王森莽
李铁平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd filed Critical Beijing Superred Technology Co Ltd
Priority to CN202310103617.5A priority Critical patent/CN115840654B/en
Publication of CN115840654A publication Critical patent/CN115840654A/en
Application granted granted Critical
Publication of CN115840654B publication Critical patent/CN115840654B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Multi Processors (AREA)

Abstract

The invention relates to the technical field of inter-process communication, and discloses a message processing method, system, computing device and readable storage medium. The method is executed in a consumption end of the computing device and comprises the following steps: creating a memory area shared by the consumption end and the production end, and initializing a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; creating and starting the plurality of message processing threads; and processing the messages written into the memory area by the production end through the plurality of message processing threads according to incoming parameters, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication. The technical solution of the invention improves the inter-process communication speed between the production end and the consumption end.

Description

Message processing method, system, computing device and readable storage medium
Technical Field
The present invention relates to the field of interprocess communication technologies, and in particular, to a method, a system, a computing device, and a readable storage medium for processing a message.
Background
Inter-process communication propagates or exchanges information between different processes; in general, the amount of data transferred through inter-process communication is small and the communication rate is low. One existing scheme processes messages unit by unit during inter-process communication, but it requires a complex boundary-handling algorithm. Another scheme relies on a closed-source third-party library, but that approach offers low flexibility and poor stability and is inconvenient to maintain.
Therefore, the invention provides a message processing scheme to solve the problems in the prior art.
Disclosure of Invention
To this end, the present invention provides a method, system, computing device and readable storage medium for processing a message to solve or at least alleviate the above-identified problems.
According to a first aspect of the present invention, there is provided a method for processing a message, executed in a consumption end of a computing device, the method comprising: creating a memory area shared by the consumption end and a production end, and initializing a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; creating and starting the plurality of message processing threads; and processing the messages written into the memory area by the production end through the plurality of message processing threads according to incoming parameters, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication.
Optionally, in the method for processing a message according to the present invention, the plurality of message processing threads comprise a message acquisition thread and a message analysis thread, and processing the messages written into the memory area by the production end comprises: acquiring, through the message acquisition thread, the length of the messages stored in the memory area; acquiring the messages stored in the memory area according to that length; storing the messages into a first cache region shared by the message acquisition thread and the message analysis thread; updating the semaphore for inter-thread communication, so that the message analysis thread splits the messages stored in the first cache region according to the updated semaphore for inter-thread communication; monitoring the memory area when the length of the messages stored in the memory area is zero; and, in response to a change in the semaphore for inter-process communication, continuing to execute the steps starting from acquiring the length of the messages stored in the memory area.
Optionally, in the method for processing a message according to the present invention, the plurality of message processing threads further comprise a message consumption thread, and processing the messages written into the memory area by the production end further comprises: determining, through the message analysis thread, whether the updated semaphore for inter-thread communication counts any message stored in the memory area; if so, splitting the messages stored in the first cache region, storing the split messages into a second cache region shared by the message analysis thread and the message consumption thread, counting the split messages stored in the second cache region, and updating the updated semaphore for inter-thread communication again, so that the message consumption thread performs predetermined processing on the split messages stored in the second cache region according to the re-updated semaphore for inter-thread communication; otherwise, suspending the message analysis thread and releasing the system resources occupied by the message analysis thread.
Optionally, in the method for processing a message according to the present invention, processing the messages written into the memory area by the production end further comprises: determining, through the message consumption thread, whether the re-updated semaphore for inter-thread communication counts any message stored in the memory area; if so, performing the predetermined processing on the split messages stored in the second cache region; otherwise, suspending the message consumption thread and releasing the system resources occupied by the message consumption thread.
Optionally, in the method for processing a message according to the present invention, the plurality of message processing threads further comprise a message cache length monitoring thread, and processing the messages written into the memory area by the production end further comprises: storing, through the message acquisition thread, the messages stored in the memory area into a third cache region shared by the message acquisition thread and the message cache length monitoring thread; printing, through the message cache length monitoring thread, the length of the messages stored in the third cache region; putting the message cache length monitoring thread to sleep for a predetermined time; and, after the sleep of the message cache length monitoring thread ends, monitoring the length of the messages stored in the third cache region.
Optionally, in the message processing method according to the present invention, the method further comprises: obtaining the size of the largest data structure unit of the computing device, and adjusting the size of the memory area to an integer multiple of the size of the largest data structure unit.
Optionally, in the message processing method according to the present invention, the method further comprises: creating, in the memory area, a circular queue structure variable shared by the consumption end and the production end, and performing memory mapping of a target file, so that the read-write process of messages is handled through the circular queue structure variable and messages are recorded through the target file.
Optionally, in the method for processing a message according to the present invention, the method further comprises: associating the plurality of message processing threads with a main thread of the consumption end.
Optionally, in the message processing method according to the present invention, the method further comprises: in response to the ending of the plurality of message processing threads, cancelling the memory mapping of the target file and deleting the shared memory area.
Optionally, in the message processing method according to the present invention, the predetermined processing includes printing.
According to a second aspect of the present invention, there is provided a method for processing a message, executed in a production end of a computing device, the method comprising: initializing a memory area shared by the production end and a consumption end and a semaphore for inter-process communication between the consumption end and the production end; obtaining a memory unit of the memory area; and writing messages into the memory unit through multiple threads and updating the semaphore for inter-process communication, so that the consumption end processes the messages according to the message processing method described above.
According to a third aspect of the present invention, there is provided a message processing system, comprising a consumption end and a production end, wherein the consumption end is adapted to: create a memory area shared by the consumption end and the production end, and initialize a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; create and start the plurality of message processing threads; and process the messages written into the memory area by the production end through the plurality of message processing threads according to incoming parameters, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication; and the production end is adapted to: initialize a memory area shared by the production end and the consumption end and the semaphore for inter-process communication between the consumption end and the production end; obtain a memory unit of the memory area; and write messages into the memory unit through multiple threads and update the semaphore for inter-process communication.
According to a fourth aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and comprise instructions for performing the method described above.
According to a fifth aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method as described above.
According to the technical scheme, the memory area shared by the production end and the consumption end is created, so that the production end and the consumption end can access the memory area. The production end writes the message into the memory area, and the consumption end processes the message in the memory area, so that the production end and the consumption end do not need to directly transmit the message, but transmit the message in a mode of accessing the shared memory area, and the inter-process communication speed between the production end and the consumption end is improved. The invention also processes the message through a plurality of message processing threads, thereby improving the processing speed of the message and further improving the interprocess communication speed between the production end and the consumption end.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a block diagram of the physical components (i.e., hardware) of a computing device 100;
FIG. 2 illustrates a flow diagram of a method 200 of processing a message according to one embodiment of the invention;
FIG. 3 shows a flow diagram of a method 300 of processing a message according to another embodiment of the invention;
FIG. 4 illustrates a flow diagram for a message fetch thread processing a message in accordance with one embodiment of the present invention;
FIG. 5 illustrates a flow diagram for a message parsing thread to process a message in accordance with one embodiment of the invention;
FIG. 6 illustrates a flow diagram for a message consuming thread processing a message according to one embodiment of the invention;
FIG. 7 illustrates a flow diagram for a message cache length snoop thread processing a message according to one embodiment of the present invention;
fig. 8 shows a schematic diagram of a system 800 for processing messages according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 illustrates a block diagram of the physical components (i.e., hardware) of a computing device 100. In a basic configuration, computing device 100 includes at least one processing unit 102 and system memory 104. According to one aspect, the processing unit 102 may be implemented as a processor depending on the configuration and type of computing device. The system memory 104 includes, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read-only memory), flash memory, or any combination of such memories. According to one aspect, operating system 105 and program modules 106 are included in system memory 104, and a message processing system 800 is included in program modules 106, message processing system 800 being configured to perform message processing methods 200 and 300 of the present invention.
According to one aspect, the operating system 105 is, for example, adapted to control the operation of the computing device 100. Further, the examples are practiced in conjunction with a graphics library, other operating systems, or any other application program, and are not limited to any particular application or system. This basic configuration is illustrated in fig. 1 by those components within dashed line 108. According to one aspect, the computing device 100 has additional features or functionality. For example, according to one aspect, computing device 100 includes additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 1 by removable storage device 109 and non-removable storage device 110.
As stated hereinabove, according to one aspect, program modules are stored in the system memory 104. According to one aspect, the program modules may include one or more applications, the invention not being limited to the type of application, for example, the applications may include: email and contacts applications, word processing applications, spreadsheet applications, database applications, slide show applications, drawing or computer-aided applications, web browser applications, and the like.
According to one aspect, examples may be practiced in a circuit comprising discrete electronic elements, a packaged or integrated electronic chip containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. For example, an example may be practiced via a system on a chip (SOC) in which each or many of the components shown in fig. 1 may be integrated on a single integrated circuit. According to one aspect, such SOC devices may include one or more processing units, graphics units, communication units, system virtualization units, and various application functions, all integrated (or "burned") onto a chip substrate as a single integrated circuit. When operating via an SOC, the functions described herein may be operated via application-specific logic integrated with other components of the computing device 100 on a single integrated circuit (chip). Embodiments of the invention may also be practiced using other technologies capable of performing logical operations (e.g., AND, OR, and NOT), including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the invention may be practiced within a general purpose computer or in any other circuits or systems.
According to one aspect, computing device 100 may also have one or more input devices 112, such as a keyboard, mouse, pen, voice input device, touch input device, or the like. Output device(s) 114 such as a display, speakers, printer, etc. may also be included. The foregoing devices are examples and other devices may also be used. Computing device 100 may include one or more communication connections 116 that allow communication with other computing devices 118. Examples of suitable communication connections 116 include, but are not limited to: RF transmitter, receiver and/or transceiver circuitry; universal Serial Bus (USB), parallel, and/or serial ports.
The term computer readable media as used herein includes computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program modules. System memory 104, removable storage 109, and non-removable storage 110 are all examples of computer storage media (i.e., memory storage). Computer storage media may include Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture that can be used to store information and that can be accessed by the computing device 100. In accordance with one aspect, any such computer storage media may be part of computing device 100. Computer storage media does not include a carrier wave or other propagated data signal.
According to one aspect, communication media is embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal (e.g., a carrier wave or other transport mechanism) and includes any information delivery media. According to one aspect, the term "modulated data signal" describes a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio Frequency (RF), infrared, and other wireless media.
In some embodiments of the invention, computing device 100 includes one or more processors, and one or more readable storage media storing program instructions. The program instructions, when configured to be executed by one or more processors, cause a computing device to perform a method of processing a message in an embodiment of the invention.
The message processing method of the present invention can be executed in the production side and the consumption side, and the message processing method executed in the production side will be described first.
Fig. 2 shows a flow diagram of a method 200 of processing a message according to one embodiment of the invention. Method 200 may be performed in an operating system of a computing device, such as computing device 100 described above. The operating system may be any operating system, such as: windows, linux, unix, but is not so limited. The computing device executing the method 200 includes a producer side and a consumer side, wherein the producer side may also refer to a producer process, and the consumer side may also refer to a consumer process. Method 200 is suitable for execution in a production side of a computing device, such as computing device 100 described above. As shown in fig. 2, the method 200 begins at step 210.
In step 210, a memory area shared by the producer and the consumer and a semaphore for inter-process communication between the consumer and the producer are initialized.
According to the embodiment of the invention, after the consumption end creates the memory area shared by the consumption end and the production end, the production end and the consumption end initialize the shared memory area. By reading data from or writing data into the shared memory area, the two processes of the consumption end and the production end can communicate with each other. With this memory area, the data (such as messages) does not need to be copied; the two processes of the consumption end and the production end only need to be mapped to the same physical memory (i.e., the shared memory area) in the computing device, so that both can see the memory area, and while one of the two processes reads from the memory area, the other can write to it.
According to an embodiment of the invention, a semaphore for inter-process communication between the consumption end and the production end is initialized. This semaphore may be used to indicate the number of resources allowed to be accessed, and may be set to 0 when initialized, indicating that no resource is accessible.
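For illustration only, the following minimal POSIX C sketch shows one way this step could be set up on Linux. The object names /ipc_shm and /ipc_sem, the 4096-byte size, and the use of named POSIX semaphores are assumptions for the sketch, not features required by the invention (older systems may need linking with -lrt -lpthread).

```c
/* Minimal sketch of step 210/310: create (or open) the shared memory area and the
 * inter-process semaphore. Names and sizes are illustrative assumptions. */
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/ipc_shm"   /* hypothetical shared-memory object name */
#define SEM_NAME "/ipc_sem"   /* hypothetical inter-process semaphore name */
#define SHM_SIZE 4096         /* illustrative size; see the page-size note later */

int main(void) {
    /* Create the memory area shared by the consumption end and the production end. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }
    void *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Initialize the inter-process semaphore to 0: no resource accessible yet. */
    sem_t *ipc_sem = sem_open(SEM_NAME, O_CREAT, 0666, 0);
    if (ipc_sem == SEM_FAILED) { perror("sem_open"); return 1; }

    printf("shared region at %p, inter-process semaphore initialized to 0\n", region);
    return 0;
}
```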
Subsequently, in step 220, the memory cells of the memory region are obtained.
In some embodiments, the start position for writing information into the memory unit of the memory area shared by the production end and the consumption end is obtained, so that messages are subsequently written into the memory area starting from this position. Optionally, in the memory area, the production end performs memory mapping of the target file, so that it can directly access the target file at the mapped address and obtain the messages recorded in the target file. Optionally, both the production end and the consumption end perform memory mapping of the target file, so that both can access the target file directly: the production end reads messages from the mapped address of the target file and writes them into the memory unit, and the consumption end then obtains the written messages from the memory unit and processes them accordingly.
Subsequently, in step 230, the message is written to the memory unit by multithreading and the semaphore for interprocess communication is updated so that the consuming side processes the message.
In some embodiments, multiple threads write messages into the memory unit, each time obtaining the current write address of the memory unit (starting from the start position for writing information) through an atomic operation. Taking the writing of multiple messages as an example, the first message is written at the start position, and each subsequent message is written at the latest write address obtained at the time of writing. Specifically, after reading a message from the target file at the mapped address, the production end writes the message into the memory unit.
In some embodiments, the semaphore for inter-process communication may be set to 0 during initialization; then, whenever a message is written into the memory unit, the semaphore is updated by increasing it by a corresponding amount, for example by one. When the semaphore for inter-process communication is positive, there are resources in the memory area that are allowed to be accessed; the consumption end can determine from the semaphore that such resources exist, and can therefore obtain messages from the memory area and process them. Specifically, the consumption end may process the messages according to the message processing method 300 described later.
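Continuing the sketch above, the producer-side write of steps 220 and 230 could look as follows. The region layout (an atomic write offset followed by a byte buffer) and the absence of wrap-around handling are simplifying assumptions for illustration only.

```c
/* Sketch of steps 220-230 on the production end: reserve a write offset with an
 * atomic operation, copy the message in, then post the inter-process semaphore
 * so the consumption end knows a resource is available. */
#include <semaphore.h>
#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    atomic_size_t write_off;   /* current write position inside data[] */
    char          data[4096];  /* message storage (illustrative size) */
} shared_region_t;

void produce(shared_region_t *region, sem_t *ipc_sem,
             const void *msg, size_t len) {
    /* Atomically reserve space; safe even if several producer threads write. */
    size_t off = atomic_fetch_add(&region->write_off, len);
    memcpy(region->data + off, msg, len);   /* write the message */
    sem_post(ipc_sem);                      /* update the IPC semaphore (+1) */
}
```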
Next, a method of processing a message executed in the consumer side will be described.
Fig. 3 shows a flow diagram of a method 300 of processing a message according to another embodiment of the invention. The method 300 may be performed in an operating system of a computing device, such as the computing device 100 described above. The operating system may be any operating system, such as: windows, linux, unix, but is not so limited. The computing device executing the method 300 includes a producer side and a consumer side, wherein the producer side may also be referred to as a producer process and the consumer side may also be referred to as a consumer process. The method 300 is suitable for execution in a consuming side of a computing device, such as the computing device 100 described above. As shown in fig. 3, the method 300 begins at step 310.
In step 310, a memory area shared by the consuming side and the producing side is created, and a semaphore for inter-process communication between the consuming side and the producing side and a semaphore for inter-thread communication between a plurality of message processing threads of the consuming side are initialized.
According to the embodiment of the invention, after the consumption end creates the memory area shared by the consumption end and the production end, the consumption end and the production end jointly initialize the semaphore for inter-process communication. This semaphore may be used to indicate the number of resources allowed to be accessed, and may be set to 0 when initialized, indicating that no resource is accessible. In addition, the semaphore for inter-thread communication between the plurality of message processing threads of the consumption end also needs to be initialized; it can be used to control access to resources, i.e., access to the messages in the memory area, and it can indicate the amount of common resources not yet used by the threads. During initialization, the semaphore for inter-thread communication may be set to the amount of resources currently available for inter-thread communication, for example, the number of messages currently counted by a thread.
In some embodiments, the size of the largest data structure unit of the computing device is obtained and the size of the memory region is adjusted to be an integer multiple of the size of the largest data structure unit.
Specifically, the communication protocol used for inter-process communication between the production end and the consumption end defines a set of structure units, and the size of the largest data structure unit can be obtained by comparing all the structure units involved in the communication protocol. Alternatively, the size of the largest data structure unit may be taken as the size of a memory page.
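As a sketch of this adjustment, assuming (as the text allows) that the largest data structure unit is the memory page:

```c
/* Round the requested region size up to an integer multiple of the largest
 * data structure unit, here assumed to be the page size. */
#include <stddef.h>
#include <unistd.h>

size_t round_up_to_unit(size_t requested) {
    size_t unit = (size_t)sysconf(_SC_PAGESIZE);   /* largest unit = page size */
    return ((requested + unit - 1) / unit) * unit; /* smallest multiple >= requested */
}
```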
In some embodiments, a circular queue structure variable shared by the consumption end and the production end is created in the memory area, so that the read-write process of messages is handled through this circular queue structure variable; for example, the production end is responsible for writing messages into the circular queue structure variable, and the consumption end is responsible for reading messages from it. Optionally, in the memory area, the consumption end performs memory mapping of the target file, so that it can directly access the target file at the mapped address and record messages through the target file.
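One possible shape of the circular queue structure variable and of the target-file mapping is sketched below; the field names, the flexible array member, and the file-creation flags are illustrative assumptions rather than the layout mandated by the invention.

```c
/* Sketch of a ring-queue structure variable placed in the shared region, plus a
 * memory mapping of a target file used to record messages. */
#include <fcntl.h>
#include <stdatomic.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    atomic_size_t head;       /* read index, advanced by the consumption end */
    atomic_size_t tail;       /* write index, advanced by the production end */
    size_t        capacity;   /* number of bytes in buf[] */
    char          buf[];      /* message bytes (flexible array member) */
} ring_queue_t;

void *map_target_file(const char *path, size_t size) {
    int fd = open(path, O_CREAT | O_RDWR, 0666);   /* e.g. a message log file */
    if (fd < 0) return MAP_FAILED;
    if (ftruncate(fd, (off_t)size) < 0) { close(fd); return MAP_FAILED; }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                                     /* the mapping stays valid */
    return p;                                      /* messages recorded via *p */
}
```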
In step 320, a plurality of message processing threads are created and started by the consumption end.
According to the embodiment of the invention, the plurality of message processing threads may comprise a message acquisition thread, a message analysis thread, a message consumption thread, and/or a message cache length listening thread. The message acquisition thread may be used to obtain the messages generated by the production end. The message analysis thread may be used to parse the messages obtained by the message acquisition thread, for example by splitting them. The message consumption thread may be used to perform additional processing on the parsed messages, for example printing them. The message cache length listening thread may be used to monitor whether messages exist in the cache.
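A minimal sketch of creating and starting these threads, with the two kinds of semaphores passed as incoming parameters, follows. The parameter struct and the thread function names are placeholders corresponding to the flows of FIGS. 4 to 7, not names used by the invention.

```c
/* Sketch of step 320: create and start the message processing threads, passing
 * the semaphores (and the shared region) as incoming parameters. */
#include <pthread.h>
#include <semaphore.h>

typedef struct {
    void  *region;        /* shared memory area */
    sem_t *ipc_sem;       /* semaphore for inter-process communication */
    sem_t *parse_sem;     /* inter-thread semaphore: acquisition -> analysis */
    sem_t *consume_sem;   /* inter-thread semaphore: analysis -> consumption */
} thread_params_t;

void *fetch_thread(void *arg);    /* FIG. 4: message acquisition */
void *parse_thread(void *arg);    /* FIG. 5: message analysis */
void *consume_thread(void *arg);  /* FIG. 6: message consumption */
void *monitor_thread(void *arg);  /* FIG. 7: cache length monitoring */

int start_workers(pthread_t t[4], thread_params_t *p) {
    if (pthread_create(&t[0], NULL, fetch_thread,   p)) return -1;
    if (pthread_create(&t[1], NULL, parse_thread,   p)) return -1;
    if (pthread_create(&t[2], NULL, consume_thread, p)) return -1;
    if (pthread_create(&t[3], NULL, monitor_thread, p)) return -1;
    return 0;
}
```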
In step 330, the message written into the memory area by the producer is processed by the plurality of message processing threads according to the incoming parameters, wherein the incoming parameters include the semaphore of the interprocess communication and the semaphore of the inter-thread communication.
The following describes a flow of processing a message by each of a plurality of message processing threads.
FIG. 4 illustrates a flow diagram for a message fetch thread processing a message in accordance with one embodiment of the present invention. As shown in fig. 4, in step 410, the length of the message stored in the memory area is obtained through the message obtaining thread.
Specifically, each time a message is sent, the message count is incremented and the total message length grows accordingly; each time a message is received, the message count is decremented and the total message length shrinks accordingly. Optionally, the length of the messages stored in the memory unit of the memory area is obtained.
Subsequently, in step 420, the message stored in the memory area is obtained according to the length of the message.
Specifically, according to the obtained length of the messages stored in the memory area, the messages of that length are read from the memory area; that is, everything stored in the memory area is obtained as a whole.
Then, in step 430, the message stored in the memory area is stored in the first cache area shared by the message obtaining thread and the message parsing thread.
In some embodiments, the message obtaining thread also stores the message stored in the memory area into a third cache area shared by the message obtaining thread and the message buffer length listening thread, so that the message buffer length listening thread processes the message stored in the shared third cache area.
Here, the first cache region and the third cache region exist in the computing device independently of the memory area shared by the production end and the consumption end.
Then, in step 440, the semaphore of the inter-thread communication is updated, so that the message parsing thread splits the message stored in the first buffer according to the updated semaphore of the inter-thread communication.
According to an embodiment of the invention, the semaphore for inter-thread communication is the number of messages currently counted by a thread. Updating the semaphore for inter-thread communication means bringing it to the latest state, i.e., setting it to the number of messages currently counted by the message acquisition thread.
Subsequently, in step 450, when the length of the message stored in the memory area is zero, the memory area is monitored.
According to the embodiment of the invention, when the length of the messages stored in the memory area is zero, this indicates that no message is stored in the memory area; at this point the memory area is monitored, so that processing continues as soon as a message appears in the memory area.
Then, in step 460, in response to a change in the semaphore for inter-process communication, the steps starting from acquiring the length of the messages stored in the memory area are executed again.
According to the embodiment of the invention, a change in the semaphore for inter-process communication indicates that the resources in the memory area accessible to the consumption end have changed and that accessible messages exist in the memory area, so steps 410 to 460 continue to be executed.
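Putting steps 410 to 460 together, the message acquisition thread could be sketched as follows. The sketch reuses the illustrative shared_region_t and thread_params_t types (and headers) from the sketches above; the fixed-size first cache and the lack of locking around it are simplifications, not part of the claimed solution.

```c
static char first_cache[4096];   /* first cache shared with the analysis thread */

void *fetch_thread(void *arg) {
    thread_params_t *p = arg;
    shared_region_t *r = p->region;
    for (;;) {
        size_t len = atomic_load(&r->write_off);   /* 410: length of stored messages */
        if (len == 0) {                            /* 450: nothing stored, listen */
            sem_wait(p->ipc_sem);                  /* 460: wait for the IPC semaphore */
            continue;                              /* then re-read the length */
        }
        memcpy(first_cache, r->data, len);         /* 420/430: fetch into first cache */
        atomic_store(&r->write_off, 0);            /* mark the region as drained */
        sem_post(p->parse_sem);                    /* 440: update inter-thread semaphore */
    }
    return NULL;
}
```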
FIG. 5 illustrates a flow diagram for a message parsing thread to process a message according to one embodiment of the invention. As shown in fig. 5, in step 510, the message analysis thread determines whether the updated semaphore for inter-thread communication counts any message stored in the memory area.
Subsequently, in step 520, if the updated semaphore for inter-thread communication counts a message, the messages stored in the first cache region are split and then stored into a second cache region shared by the message analysis thread and the message consumption thread, the split messages stored in the second cache region are counted, and the updated semaphore for inter-thread communication is updated again, so that the message consumption thread performs predetermined processing on the split messages stored in the second cache region according to the re-updated semaphore for inter-thread communication.
The second cache region exists in the computing device independently of the memory area shared by the production end and the consumption end.
In some embodiments, the messages stored in the first cache region may be split by a specific length, for example into a plurality of messages of that specific length, so that a large message is split into several small ones. Of course, the messages may also be split in other ways, and the invention is not limited in this respect.
According to an embodiment of the invention, the semaphore for inter-thread communication is the number of messages currently counted by a thread. Updating it again means bringing it to the latest state, i.e., setting it to the number of split messages currently counted by the message analysis thread.
If the updated semaphore for inter-thread communication does not count any message, step 530 is executed: the message analysis thread is suspended and the system resources occupied by the message analysis thread are released.
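The FIG. 5 flow can be sketched in the same style. Here sem_trywait() stands in for "no message counted", and split_into_second_cache() is a hypothetical helper assumed for illustration, not a function defined by the invention.

```c
size_t split_into_second_cache(const char *cache);  /* hypothetical splitting helper */

void *parse_thread(void *arg) {
    thread_params_t *p = arg;
    for (;;) {
        if (sem_trywait(p->parse_sem) != 0)          /* 510: no message counted */
            break;                                   /* 530: end thread, free resources */
        size_t pieces = split_into_second_cache(first_cache);   /* 520: split & store */
        for (size_t i = 0; i < pieces; i++)
            sem_post(p->consume_sem);                /* 520: update the semaphore again */
    }
    return NULL;                                     /* resources released on return */
}
```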
FIG. 6 illustrates a flow diagram for a message consuming thread processing a message according to one embodiment of the invention. As shown in fig. 6, in step 610, the message consuming thread determines whether the re-updated semaphore for inter-thread communication counts any message.
If the re-updated semaphore for inter-thread communication counts a message, step 620 is executed, and the predetermined processing is performed on the split messages stored in the second cache region.
In some embodiments, the predetermined processing of the split messages stored in the second cache region may be printing them one by one.
If the re-updated semaphore for inter-thread communication does not count any message, step 630 is executed: the message consuming thread is suspended, and the system resources occupied by it are released.
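A matching sketch of the FIG. 6 flow, where the predetermined processing is printing and print_next_from_second_cache() is a hypothetical helper assumed only for illustration:

```c
void print_next_from_second_cache(void);   /* hypothetical: prints one split message */

void *consume_thread(void *arg) {
    thread_params_t *p = arg;
    for (;;) {
        if (sem_trywait(p->consume_sem) != 0)  /* 610: no message counted */
            break;                             /* 630: suspend, release resources */
        print_next_from_second_cache();        /* 620: predetermined processing */
    }
    return NULL;
}
```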
FIG. 7 illustrates a flow diagram for a message cache length snoop thread processing a message according to one embodiment of the present invention. As shown in fig. 7, in step 710, the length of the message stored in the third buffer is printed by the message buffer length listening thread.
Subsequently, in step 720, the message buffer length listening thread is put to sleep for a predetermined time.
In some embodiments, the predetermined time period may be set as desired, for example: the predetermined time period may be 1 second, but the predetermined time period may be set to other time periods. Here, the purpose of sleeping the message buffer length listening thread for a predetermined time is to facilitate statistics of packet loss that may exist in multi-threaded message passing, and to alleviate the bandwidth problem.
Subsequently, in step 730, after the sleep of the message buffer length monitoring thread is finished, the length of the message stored in the third buffer is monitored.
In some embodiments, the message cache length monitoring thread continuously monitors the messages stored in the third cache region, and when the length of the messages stored in the third cache region changes, the steps 710 to 730 are continuously executed, so as to conveniently count the length of the messages currently stored in the third cache region.
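The FIG. 7 flow could be sketched as below; third_cache_len is an assumed shared counter kept up to date by the message acquisition thread, and the one-second period matches the example in the text.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

extern atomic_size_t third_cache_len;   /* length of messages in the third cache */

void *monitor_thread(void *arg) {
    (void)arg;
    for (;;) {
        printf("third cache length: %zu\n",
               atomic_load(&third_cache_len));   /* 710: print the stored length */
        sleep(1);                                /* 720: sleep for the set period */
        /* 730: after waking, loop back and observe the length again */
    }
    return NULL;
}
```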
In some embodiments, after step 330 of method 300, multiple message processing threads may also be associated with the consuming side's main thread. Optionally, the aforementioned multiple message processing threads are associated with the main thread of the consuming side in a blocking manner.
Subsequently, in response to the ending of the plurality of message processing threads, the memory mapping of the target file can be cancelled, and the memory area shared by the production end and the consumption end can be deleted.
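The blocking association with the main thread and the cleanup described here might look like the following sketch, again reusing the names introduced in the earlier sketches (SHM_NAME, the thread array, the mappings); these names are illustrative assumptions.

```c
void join_and_cleanup(pthread_t t[4], void *file_map, size_t file_size,
                      void *region, size_t region_size) {
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);   /* block until each message processing thread ends */
    munmap(file_map, file_size);    /* cancel the memory mapping of the target file */
    munmap(region, region_size);    /* unmap the shared memory area ... */
    shm_unlink(SHM_NAME);           /* ... and delete it */
}
```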
In some embodiments, the number of each thread in the message obtaining thread, the message parsing thread, the message consuming thread, and the message cache length monitoring thread may also be allocated according to a system resource condition of the operating system, so as to increase the processing efficiency of the message and further improve the speed of inter-process communication between the producing end and the consuming end.
The invention also provides a message processing system. Fig. 8 shows a schematic diagram of a system 800 for processing messages according to an embodiment of the invention. As shown in fig. 8, a system 800 for processing messages includes a consumer end 810 and a producer end 820.
Wherein the consuming end 810 is adapted to: create a memory area shared by the consumption end and the production end, and initialize a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; create and start the plurality of message processing threads; and process the messages written into the memory area by the production end through the plurality of message processing threads according to incoming parameters, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication.
Wherein the production end 820 is adapted to: initializing a memory area shared by a production end and a consumption end and a semaphore for interprocess communication between the consumption end and the production end; acquiring a memory unit of a memory area; messages are written to the memory unit through multithreading and the semaphore for interprocess communication is updated.
It should be noted that, the details of the message processing system 800 provided in this embodiment have been disclosed in detail in the description based on fig. 1 to fig. 7, and are not described herein again.
According to the technical scheme, the memory area shared by the production end and the consumption end is created, so that the production end and the consumption end can access the memory area. The production end writes the message into the memory area, and the consumption end processes the message in the memory area, so that the production end and the consumption end do not need to directly transmit the message, but transmit the message in a mode of accessing the shared memory area, and the inter-process communication speed between the production end and the consumption end is improved. The invention also processes the message through a plurality of message processing threads, thereby improving the processing speed of the message and further improving the interprocess communication speed between the production end and the consumption end.
Furthermore, the invention acquires the message from the memory area through the message acquisition thread, splits the message acquired by the message acquisition thread through the message analysis thread, performs preset processing on the message split by the message analysis thread through the message consumption thread, monitors the length of the message acquired by the message acquisition thread through the message cache length monitoring thread, and completes the processing of the message through the cooperation of the threads, thereby improving the inter-process communication speed between the production end and the consumption end.
In addition, the technical scheme of the invention can be suitable for any operating system and has high flexibility. The number of each thread in a plurality of message processing threads can be allocated, so that the message processing efficiency is improved, and the speed of interprocess communication between the production end and the consumption end is further improved. The invention can realize high-speed communication among processes without complex boundary processing algorithm.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store program code; the processor is configured to carry out the message processing method of the invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification, and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification may be replaced by an alternative feature serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.

Claims (10)

1. A method of processing a message, performed in a consuming side of a computing device, the method comprising:
creating a memory area shared by the consumption end and a production end, and initializing a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end;
creating and starting the plurality of message processing threads;
and processing the message written into the memory area by the production end through the plurality of message processing threads according to incoming parameters, wherein the incoming parameters comprise the semaphore of the interprocess communication and the semaphore of the inter-thread communication.
2. The method of claim 1, wherein the plurality of message processing threads comprise a message acquisition thread and a message parsing thread, and processing the message written into the memory area by the producer includes:
acquiring the length of the message stored in the memory area through the message acquisition thread;
acquiring the message stored in the memory area according to the length of the message stored in the memory area;
storing the message into a first cache region shared by the message acquisition thread and the message analysis thread;
updating the semaphore of the inter-thread communication so that the message analysis thread splits the message stored in the first cache region according to the updated semaphore of the inter-thread communication;
when the length of the message stored in the memory area is zero, monitoring the memory area;
and responding to the change of the semaphore of the inter-process communication, and continuing to execute the steps starting from the step of acquiring the length of the message stored in the memory area.
3. The method of claim 2, wherein the plurality of message processing threads further comprises a message consuming thread that processes messages written to the memory region by the producer, further comprising:
determining, through the message analysis thread, whether the updated semaphore for inter-thread communication counts any message stored in the memory area;
if so, splitting the message stored in the first cache region, storing the split message into a second cache region shared by the message analysis thread and the message consumption thread, counting the split message stored in the second cache region, and updating the updated semaphore of inter-thread communication again, so that the message consumption thread performs predetermined processing on the split message stored in the second cache region according to the re-updated semaphore of inter-thread communication;
otherwise, suspending the message analysis thread and releasing the system resources occupied by the message analysis thread.
4. The method of claim 3, wherein processing the message written into the memory region by the production end further comprises:
determining, through the message consumption thread, whether the re-updated semaphore for inter-thread communication counts any message stored in the memory area;
if so, performing the predetermined processing on the split message stored in the second cache region;
otherwise, suspending the message consumption thread and releasing the system resources occupied by the message consumption thread.
5. The method of any of claims 2 to 4, wherein the plurality of message processing threads further comprises a message buffer length listening thread that processes messages written into the memory region by the producer, further comprising:
storing the messages stored in the memory area into a third cache area shared by the message acquisition thread and the message cache length monitoring thread through the message acquisition thread;
printing the length of the message stored in the third cache region through the message cache length monitoring thread;
performing sleep processing for a preset time length on the message cache length monitoring thread;
and after the sleep of the message cache length monitoring thread is finished, monitoring the length of the message stored in the third cache region.
6. The method of any of claims 1 to 4, further comprising:
and acquiring the size of the largest data structure unit of the computing device, and adjusting the size of the memory area to be an integer multiple of the size of the largest data structure unit.
7. A method of processing a message, performed in a production side of a computing device, the method comprising:
initializing a memory area shared by the production end and the consumption end and a semaphore for inter-process communication between the consumption end and the production end;
acquiring a memory unit of the memory area;
writing messages into the memory unit through multiple threads and updating the semaphore for inter-process communication, so that the consumption end processes the messages according to the method of any of claims 1 to 6.
8. A system for processing a message, comprising a consumer side and a producer side, wherein:
the consumer-side is adapted to: create a memory area shared by the consumption end and the production end, and initialize a semaphore for inter-process communication between the consumption end and the production end and a semaphore for inter-thread communication between a plurality of message processing threads of the consumption end; create and start the plurality of message processing threads; and process the messages written into the memory area by the production end through the plurality of message processing threads according to incoming parameters, wherein the incoming parameters comprise the semaphore for inter-process communication and the semaphore for inter-thread communication;
the production end is adapted to: initialize a memory area shared by the production end and the consumption end and the semaphore for inter-process communication between the consumption end and the production end; obtain a memory unit of the memory area; and write messages into the memory unit through multiple threads, and update the semaphore for inter-process communication.
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1 to 7.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1 to 7.
CN202310103617.5A 2023-01-30 2023-01-30 Message processing method, system, computing device and readable storage medium Active CN115840654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310103617.5A CN115840654B (en) 2023-01-30 2023-01-30 Message processing method, system, computing device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310103617.5A CN115840654B (en) 2023-01-30 2023-01-30 Message processing method, system, computing device and readable storage medium

Publications (2)

Publication Number Publication Date
CN115840654A (en) 2023-03-24
CN115840654B CN115840654B (en) 2023-05-12

Family

ID=85579629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310103617.5A Active CN115840654B (en) 2023-01-30 2023-01-30 Message processing method, system, computing device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115840654B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352743A (en) * 2018-12-24 2020-06-30 北京新媒传信科技有限公司 Process communication method and device
US20210149680A1 (en) * 2019-11-15 2021-05-20 Intel Corporation Data locality enhancement for graphics processing units
CN111651286A (en) * 2020-05-27 2020-09-11 泰康保险集团股份有限公司 Data communication method, device, computing equipment and storage medium
CN113778700A (en) * 2020-10-27 2021-12-10 北京沃东天骏信息技术有限公司 Message processing method, system, medium and computer system
CN113176942A (en) * 2021-04-23 2021-07-27 北京蓝色星云科技发展有限公司 Method and device for sharing cache and electronic equipment
CN114911632A (en) * 2022-07-11 2022-08-16 北京融为科技有限公司 Method and system for controlling inter-process communication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
严兵: "Analysis and Implementation of the Producer/Consumer Problem" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117076139A (en) * 2023-10-17 2023-11-17 北京融为科技有限公司 Data processing method and related equipment
CN117076139B (en) * 2023-10-17 2024-04-02 北京融为科技有限公司 Data processing method and related equipment

Also Published As

Publication number Publication date
CN115840654B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
US11036650B2 (en) System, apparatus and method for processing remote direct memory access operations with a device-attached memory
EP2240859B1 (en) A multi-reader, multi-writer lock-free ring buffer
EP3335124B1 (en) Register files for i/o packet compression
US20150186068A1 (en) Command queuing using linked list queues
CN111949568B (en) Message processing method, device and network chip
CN110119304B (en) Interrupt processing method and device and server
US10120731B2 (en) Techniques for controlling use of locks
US12001846B2 (en) Systems, methods, and devices for queue availability monitoring
US10817183B2 (en) Information processing apparatus and information processing system
CN115840654A (en) Message processing method, system, computing device and readable storage medium
CN117377943A (en) Memory-calculation integrated parallel processing system and method
EP4124963A1 (en) System, apparatus and methods for handling consistent memory transactions according to a cxl protocol
US11481250B2 (en) Cooperative workgroup scheduling and context prefetching based on predicted modification of signal values
US20180052659A1 (en) Sending and receiving data between processing units
US20130262812A1 (en) Hardware Managed Allocation and Deallocation Evaluation Circuit
US9965321B2 (en) Error checking in out-of-order task scheduling
CN116561091A (en) Log storage method, device, equipment and readable storage medium
US20150212759A1 (en) Storage device with multiple processing units and data processing method
CN115269199A (en) Data processing method and device, electronic equipment and computer readable storage medium
US11275669B2 (en) Methods and systems for hardware-based statistics management using a general purpose memory
CN114924793A (en) Processing unit, computing device, and instruction processing method
US10891244B2 (en) Method and apparatus for redundant array of independent drives parity quality of service improvements
CN113220608A (en) NVMe command processor and processing method thereof
WO2024007745A1 (en) Data writing method and apparatus, data reading method and apparatus, electronic device, and storage medium
US11907138B2 (en) Multimedia compressed frame aware cache replacement policy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant