CN116208574A - Message processing method, device, electronic equipment and computer readable storage medium
- Publication number
- CN116208574A
- Authority
- CN
- China
- Prior art keywords
- data
- message
- packet
- data packet
- random access
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9084—Reactions to storage capacity overflow
- H04L49/9089—Reactions to storage capacity overflow replacing packets in a storage arrangement, e.g. pushout
- H04L49/9094—Arrangements for simultaneous transmit and receive, e.g. simultaneous reading/writing from/to the storage element
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9005—Buffering arrangements using dynamic buffer space allocation
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The embodiment of the invention provides a message processing method, a device, an electronic device and a computer readable storage medium, belonging to the technical field of communications. When a network device monitors that an external device starts to send the data packet of a service message, the network device writes the received data packet into a double-rate synchronous dynamic random access memory if the current packet reception is in a congestion state; if the current packet reception is not in a congestion state, the network device writes the message header data of the received data packet directly into a cache of a central processing unit and writes the data other than the message header data into the double-rate synchronous dynamic random access memory. As a result, the message header data is already in the cache when the central processing unit processes the service message, and the double-rate synchronous dynamic random access memory does not need to be accessed to acquire the message header data, which effectively reduces the access pressure of the central processing unit on the double-rate synchronous dynamic random access memory and further improves the overall performance of soft forwarding.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and apparatus for processing a message, an electronic device, and a computer readable storage medium.
Background
When a network device receives a message, it stores the message information, then identifies the stored message information, and performs table-lookup forwarding according to the identification result.
The traditional message soft forwarding flow is as follows: a DMA (Direct Memory Access) controller first writes the data packet to DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory); after the corresponding message has been written, it updates the DMA descriptor, or updates the DMA descriptor and raises an interrupt at the same time; the CPU (Central Processing Unit) perceives message reception by polling the DMA descriptors or through the interrupt provided by the DMA; the CPU then obtains the message data address from the DMA descriptor, reads the message data from the DDR, and parses and forwards it. However, because the latency of the CPU reading the DDR is long, the CPU cannot continue processing the received message until the data has been retrieved, which consumes CPU resources and greatly reduces the overall performance.
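As a rough illustration of this conventional flow (not part of the patent disclosure), the following C sketch uses a hypothetical descriptor layout and a placeholder parse_and_forward() routine to show the CPU polling the descriptor ring and then reading the whole message, header included, from DDR before it can parse it:

```c
#include <stdint.h>

/* Illustrative only: the descriptor layout and parse_and_forward() are
 * hypothetical, not taken from the patent or from any specific driver. */
typedef struct {
    volatile uint32_t done;  /* set by the DMA controller once the packet is in DDR */
    uint64_t addr;           /* DDR address of the packet data */
    uint32_t len;            /* packet length in bytes */
} dma_desc_t;

extern void parse_and_forward(const uint8_t *pkt, uint32_t len);

/* Conventional soft forwarding: the CPU learns of a packet from the descriptor,
 * then must read the whole packet -- header included -- from DDR before parsing. */
void conventional_rx_poll(dma_desc_t *ring, int ring_size)
{
    for (int i = 0; i < ring_size; i++) {
        if (!ring[i].done)
            continue;
        /* This DDR read is the long-latency step the background points out. */
        const uint8_t *pkt = (const uint8_t *)(uintptr_t)ring[i].addr;
        parse_and_forward(pkt, ring[i].len);
        ring[i].done = 0;    /* return the descriptor to the DMA controller */
    }
}
```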
Disclosure of Invention
Accordingly, an object of the present invention is to provide a method, an apparatus, an electronic device, and a computer readable storage medium for processing a message, which can solve the problems of CPU resource consumption and greatly reduced overall performance caused by the conventional message soft forwarding method.
In order to achieve the above object, the technical scheme adopted by the embodiment of the invention is as follows:
in a first aspect, an embodiment of the present invention provides a method for processing a message, which is applied to a network device, where the network device includes a central processing unit and a double rate synchronous dynamic random access memory, and the method includes:
when it is monitored that an external device starts to send a data packet of a service message, judging whether the current packet reception is in a congestion state;
if not, receiving the data packet, extracting the message header data of the service message from the data packet, writing the message header data into a cache of the central processing unit, and writing the data in the data packet other than the message header data into the double-rate synchronous dynamic random access memory;
if yes, receiving the data packet and writing the data packet into the double-rate synchronous dynamic random access memory.
Further, the step of receiving the data packet includes:
adopting direct memory access, receiving a data packet of a service message sent by external equipment, and updating a state value of a DMA descriptor from a first state value to a second state value;
the first state value indicates that message data reception is not complete, and the second state value indicates that message data reception is complete;
Before the step of writing the data other than the message header data into the double-rate synchronous dynamic random access memory, or before the step of writing the data packet into the double-rate synchronous dynamic random access memory, the method further comprises:
and determining the data address of the storage area for storing the data packet in the double-rate synchronous dynamic random access memory.
Further, the step of judging whether the current packet reception is in a congestion state includes:
determining the total number of descriptors representing the completion state of receiving the message data from all DMA descriptors;
and judging whether the total number of descriptors exceeds a preset threshold value; if so, judging that the current packet reception is in a congestion state, and if not, judging that the current packet reception is not in a congestion state.
Further, the step of writing the header data into the cache of the central processing unit includes:
transmitting the message header data and the data address to a bus; the data address is an address of a storage area storing the data packet in the double-rate synchronous dynamic random access memory;
and storing the header data into the allocated specific area in the cache of the central processing unit through the bus according to the data address.
Further, before the step of sending the header data and the data address to the bus, the method includes:
and allocating a free specific area for the message header data from the cache of the central processing unit.
Further, the step of extracting header data of the service packet from the data packet includes:
and extracting data of a preset length from the data packet, taking the initial field of the data packet as the starting point of the message header.
In a second aspect, an embodiment of the present invention provides a method for processing a packet, which is applied to a network device, where the network device includes a central processor and a double-rate synchronous dynamic random access memory, and the method includes:
polling a DMA descriptor, if the currently polled DMA descriptor is a second state value, acquiring a data address from the DMA descriptor, and determining a service message corresponding to the DMA descriptor; wherein, the second state value representation is in a message data receiving completion state;
identifying whether the message header data of the service message exists in a cache of the central processing unit, if so, analyzing and identifying the message header data in the cache, and looking up a table based on an identification result to determine forwarding information;
Obtaining the residual data of the service message from a storage area corresponding to the data address in the double-rate synchronous dynamic random access memory, and editing the residual data to obtain a message to be forwarded;
and forwarding the message to be forwarded to next-hop equipment based on the forwarding information, setting the DMA descriptor as a first state value, and releasing a data address in the DMA descriptor.
In a third aspect, an embodiment of the present invention provides a packet processing device, which is applied to a network device, where the network device includes a central processor and a double-rate synchronous dynamic random access memory, and the packet processing device includes a congestion judging module and a storage module;
the congestion judging module is used for judging whether the current packet reception is in a congestion state when it is monitored that an external device starts to send a data packet of a service message;
the storage module is used for, if not, receiving the data packet, extracting the message header data of the service message from the data packet, writing the message header data into the cache of the central processing unit, and writing the data in the data packet other than the message header data into the double-rate synchronous dynamic random access memory;
and the storage module is further used for, if yes, receiving the data packet and writing the data packet into the double-rate synchronous dynamic random access memory.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions executable by the processor, the processor being capable of executing the machine executable instructions to implement the method for processing a message according to the first aspect or the second aspect.
In a fifth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for processing a message according to the first or second aspect.
When the network device monitors that an external device starts to send the data packet of a service message, the data packet is written into the double-rate synchronous dynamic random access memory if the current packet reception is in a congestion state; if the current packet reception is not in a congestion state, the message header data in the data packet is written directly into the cache of the central processing unit, and the data other than the message header data is written into the double-rate synchronous dynamic random access memory. As a result, the message header data is already in the cache when the central processing unit processes the service message, and the double-rate synchronous dynamic random access memory does not need to be accessed to acquire the message header data. This effectively reduces the access pressure of the central processing unit on the double-rate synchronous dynamic random access memory and further improves the overall performance of soft forwarding.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a message processing system according to an embodiment of the present invention.
Fig. 2 shows one of flow diagrams of a message processing method according to an embodiment of the present invention.
Fig. 3 shows a second flowchart of a message processing method according to an embodiment of the present invention.
Fig. 4 shows a schematic flow chart of a partial sub-step of step S12 in fig. 2 or fig. 3.
Fig. 5 shows one of the flow charts of partial sub-steps of step S14 in fig. 2 or 3.
Fig. 6 is a schematic structural diagram of a network device for processing a message according to an embodiment of the present invention.
Fig. 7 shows a second flowchart of partial sub-steps of step S14 in fig. 2 or fig. 3.
Fig. 8 shows a third flowchart of a message processing method according to an embodiment of the present invention.
Fig. 9 is a schematic block diagram of a message processing apparatus according to an embodiment of the present invention.
Fig. 10 shows a block schematic diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 100-a message processing system; 110-a network device; 120-an external device; 130-a message processing device; 140, a congestion judging module; 150-a memory module; 160-a polling module; 170-an identification module; 180-editing module; 190-a forwarding module; 200-an electronic device.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the soft forwarding scenario of the message, because of the complexity of the service, software is generally used to perform identification and search processing, message editing and message forwarding of the service message.
Because the software processing of service messages (i.e. soft forwarding) is complex and the service table entries are huge, the soft forwarding process generally comprises the following procedures: completing line-side message reception and upload through hardware; storing the message in a memory device, such as a DRAM; the software perceiving the message reception; the software parsing the message stored in the memory device; according to the parsing result, performing a service table lookup, and performing service processing and editing on the message, such as flow learning, statistics, deep parsing, security policy and encryption/decryption; and, after processing and editing are finished, forwarding the message according to the forwarding decision given by the table lookup result, completing the message transmission.
In the current soft forwarding method for service messages, a DMA controller first writes the data packets of a message into the DDR; after the message has been written, the DMA descriptor is updated, or the DMA descriptor is updated and an interrupt is provided at the same time; the CPU perceives message reception by polling the DMA descriptors or through the interrupt provided by the DMA; the CPU then obtains the message data address from the DMA descriptor, reads the message data from the DDR, and parses and forwards it. However, because the latency of the CPU reading the DDR is long, the CPU cannot continue processing the received message until the data has been retrieved, which consumes CPU resources and greatly reduces the overall performance.
In another existing method, part of the key services are usually offloaded to hardware, which parses the message and looks up the tables. However, router service specifications are large, so the hardware table entries need to be stored in SRAM, and the hardware cost is therefore high. In addition, this method can only offload a part of the services; a large number of service table entries are still stored on the DDR, so the number of entries on the DDR can easily reach the million or even billion level, and most of the services whose entries are stored in the DDR can still only be sent up to the CPU for processing. Therefore, the problems of CPU resource consumption and greatly reduced overall performance remain.
Based on the above consideration, the embodiment of the invention provides a message processing method, which can solve the problems of CPU resource consumption and greatly reduced overall performance caused by the current message soft forwarding method. Hereinafter, this scheme will be described.
The method for processing a message provided in the embodiment of the present invention may be applied to the message processing system shown in fig. 1, where the message processing system 100 may include a network device 110 and a plurality of external devices 120, where the network device 110 may be communicatively connected to the plurality of external devices 120 through a network, and where the network device 110 may include a central processor and a double rate synchronous dynamic random access memory.
The external device 120 may be configured to send a packet of a service packet to the network device 110.
The network device 110 is configured to, when monitoring that the external device 120 starts sending a packet of a service packet, determine whether a current packet is in a congestion state, if not, receive the packet, extract header data of the service packet from the packet, write the header data into a cache of the central processing unit, and write the packets except the header data into the double rate synchronous dynamic random access memory.
Wherein the network device 110 includes, but is not limited to: switches, gateways, terminal devices, servers, etc. Similarly, the external device 120 may include, but is not limited to: independent servers, server clusters, terminal devices, wearable portable devices, personal computers, notebook computers, mobile terminals, switches, gateways, and the like.
In a possible implementation manner, an embodiment of the present invention provides a message processing method, and referring to fig. 2, the message processing method may include the following steps. In this embodiment, the method for processing a message is applied to the network device 110 in fig. 1 for illustration.
And S12, when it is monitored that an external device starts to send a data packet of a service message, judging whether the current packet reception is in a congestion state. If not, step S14 is executed; if yes, step S16 is executed.
In this embodiment, the reception of a data packet means that the data packet has been received but has not yet been stored in any memory device. The current packet reception refers to the whole processing flow of message receiving, message storing and message forwarding.
S14, receiving the data packet, extracting the message header data of the service message from the data packet, writing the message header data into a cache of the central processing unit, and writing the data packet except the message header data into the double-rate synchronous dynamic random access memory.
S16, receiving the data packet, and writing the data packet into the double-rate synchronous dynamic random access memory.
When the network device 110 monitors that any external device 120 starts to send a data packet of a service message, and the data packet has not yet been received and stored, two cases arise. If the current packet receiving service of the network device 110 is not in a congestion state, the header data is extracted from the data packet and written directly into the cache of the central processing unit, and the message body data (in this embodiment, the message body data refers to the data in the data packet other than the header data) is written into the double-rate synchronous dynamic random access memory. If the current packet receiving service of the network device 110 is in a congestion state, the whole data packet is written into the double-rate synchronous dynamic random access memory.
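The receive-side decision just described can be summarized by the following minimal C sketch; the helper names (rx_is_congested, write_header_to_cpu_cache, write_to_ddr) are placeholders assumed for illustration, not interfaces defined by the patent:

```c
#include <stdint.h>
#include <stdbool.h>

/* Placeholders standing in for the congestion check, the cache write and the
 * DDR write described in the text above. */
extern bool rx_is_congested(void);
extern void write_header_to_cpu_cache(const uint8_t *hdr, uint32_t len);
extern void write_to_ddr(const uint8_t *data, uint32_t len);

void on_packet_arrival(const uint8_t *pkt, uint32_t pkt_len, uint32_t hdr_len)
{
    if (rx_is_congested()) {
        /* Congestion: the whole data packet goes to the DDR SDRAM as before. */
        write_to_ddr(pkt, pkt_len);
    } else {
        /* No congestion: the header data goes straight to the CPU cache,
         * the rest of the packet goes to DDR. */
        write_header_to_cpu_cache(pkt, hdr_len);
        write_to_ddr(pkt + hdr_len, pkt_len - hdr_len);
    }
}
```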
Compared with the traditional message soft forwarding method, the message processing method provided by the embodiment of the invention writes the header data of a received service message directly into the cache of the central processing unit when the current packet reception is not in a congestion state. The header data is therefore already in the cache when the central processing unit processes the service message, and the double-rate synchronous dynamic random access memory does not need to be accessed to acquire the header data, which effectively reduces the access pressure of the central processing unit on the double-rate synchronous dynamic random access memory and further improves the overall performance of soft forwarding.
In order to make soft forwarding of the message more efficient, in one possible implementation, direct memory access and DMA descriptors are introduced, and a certain number of DMA descriptors are preset in the network device 110. DMA descriptors may include, but are not limited to: a state value for characterizing the processing state of the service message and a data address for recording the storage area of the data packet of the service message.
For example, the status values may include a first status value "0" indicating that the status of receiving the message data is incomplete, and a second status value "1" indicating that the status of receiving the message data is complete. When the state value is the first state value "0", the data address in the DMA descriptor is empty. When the state value is the second state value '1', the data address in the DMA descriptor is not null, which indicates that the message data corresponding to the data address is in the state of waiting to be processed, waiting to be edited and waiting to be forwarded.
It should be noted that all DMA descriptors in the network device 110 may have a sequential relationship, and may be in a circular order, and each DMA descriptor may record an identifier or an address of a subsequent descriptor. Taking 3 DMA descriptors with serial numbers of 1,2 and 3 as an example, the DMA descriptor with serial number 1 records the identifier or address of the DMA descriptor with serial number 2, the DMA descriptor with serial number 2 records the identifier or address of the DMA descriptor with serial number 3, and the DMA descriptor with serial number 3 records the identifier or address of the DMA descriptor with serial number 1, so as to form a cyclic sequence.
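A minimal C sketch of such a descriptor, assuming the state value, data address and cyclic linkage described above (field and constant names are illustrative only, not defined by the patent), could look like this:

```c
#include <stdint.h>

enum { DESC_RX_PENDING = 0, DESC_RX_DONE = 1 };  /* first / second state value */

typedef struct dma_descriptor {
    uint32_t state;               /* 0: reception not complete, 1: reception complete */
    uint64_t data_addr;           /* DDR address of the stored packet, 0 when empty   */
    struct dma_descriptor *next;  /* following descriptor in the ring                 */
} dma_descriptor_t;

/* Link n descriptors into the cyclic order described above (1 -> 2 -> 3 -> 1). */
static void init_descriptor_ring(dma_descriptor_t *descs, int n)
{
    for (int i = 0; i < n; i++) {
        descs[i].state = DESC_RX_PENDING;
        descs[i].data_addr = 0;
        descs[i].next = &descs[(i + 1) % n];
    }
}
```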
Further, referring to fig. 3, in steps S14 and S16, receiving the data packet may be further implemented as: receiving, by direct memory access, a data packet of a service message sent by the external device, and updating the state value of one DMA descriptor from the first state value to the second state value.
In this embodiment, the first state value represents an incomplete packet data reception state, and the second state value represents a packet data reception completion state. Also, step S14 may include step S141 and step S142, and step S16 may include step S161 and step S162.
S141, adopting direct memory access, receiving a data packet of a service message sent by the external equipment, and updating the state value of one DMA descriptor from a first state value to a second state value.
S142, extracting the header data of the service message from the data packet, writing the header data into the cache of the central processing unit, and writing the data packets except the header data into the double-rate synchronous dynamic random access memory.
S161, adopting direct memory access, receiving a data packet of a service message sent by an external device, and updating a state value of a DMA descriptor from a first state value to a second state value.
S162, writing the data packet into the double rate synchronous dynamic random access memory.
Direct memory access (DMA) is used here in the sense that the network device 110 receives the data packets of service messages sent by the external device 120 through a DMA RING.
It should be understood that the DMA descriptor selected in step S141 and step S161 is a DMA descriptor whose state value is the first state value. The manner of selecting the DMA descriptor may be chosen flexibly; for example, it may be the first descriptor with the first state value that follows a descriptor with the second state value, or it may be a DMA descriptor randomly selected from all DMA descriptors having the first state value, which is not particularly limited in this embodiment.
Further, before writing the data packet except the header data into the double rate synchronous dynamic random access memory in the above step S142, and before writing the data packet into the double rate synchronous dynamic random access memory in step S16, the method may further include: in the double rate synchronous dynamic random access memory, the data address of the storage area for storing the data packet is determined.
The network device 110 may determine the free storage areas in the double rate synchronous dynamic random access memory by monitoring or recording, so that the storage areas for storing the whole data packet or the message body data in the data packet are determined from all the free storage areas in the double rate synchronous dynamic random access memory by any storage method such as length matching and sequential storage.
The DMA descriptor in step S141 and step S161 may include the data address of the storage area used to store the data packet, or the message body data in the data packet. The data address of the storage area may be written into the DMA descriptor in real time by hardware when the data address is determined, or may be written into the DMA descriptor in advance. When the data address is written in advance, the CPU or the hardware, at the moment the DMA descriptor is switched from the second state value to the first state value, randomly determines a storage area from all the free storage areas of the double-rate synchronous dynamic random access memory and then fills the data address of that storage area into the DMA descriptor.
For each service packet sent by the external device 120, a corresponding DMA descriptor records information such as a storage area and a status value of the service packet.
In one possible implementation, in order to quickly and accurately determine the state of the current packet reception, a threshold is introduced: the DMA RING may be pre-configured with a threshold value (Threshold). On this basis, referring to fig. 4, whether the current packet reception is in a congestion state can be determined by the following steps.
S121, determining the total number of descriptors representing the completion state of receiving the message data from all the DMA descriptors.
S122, judging whether the total number of descriptors exceeds a preset threshold value. If yes, step S123 is executed, and if no, step S124 is executed.
S123, judging that the current packet reception is in a congestion state.
S124, judging that the current packet reception is not in a congestion state.
From all the DMA descriptors configured on the network device 110, the total number of DMA descriptors whose state value is the second state value is determined as the total number of descriptors. When this total number is smaller than or equal to the threshold value, the current packet reception is not in a congestion (burst) state, that is, the accesses of the central processing unit to the double-rate synchronous dynamic random access memory have not reached the memory-access upper limit; otherwise, the current packet reception is in a congestion state, that is, the accesses of the central processing unit to the double-rate synchronous dynamic random access memory have reached the memory-access upper limit.
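Under the hypothetical descriptor layout sketched earlier, steps S121 to S124 reduce to counting the descriptors in the reception-complete state and comparing the count with the pre-configured threshold, for example:

```c
#include <stdbool.h>
#include <stdint.h>

/* Same hypothetical state-value convention as in the earlier sketch. */
typedef struct { uint32_t state; } dma_descriptor_t;
enum { DESC_RX_DONE = 1 };   /* second state value: message data reception complete */

/* Steps S121–S124: count the descriptors in the reception-complete state and
 * compare the total with the pre-configured threshold of the DMA RING. */
static bool rx_is_congested(const dma_descriptor_t *descs, int n, int threshold)
{
    int done_count = 0;
    for (int i = 0; i < n; i++)
        if (descs[i].state == DESC_RX_DONE)
            done_count++;
    return done_count > threshold;   /* exceeds threshold => congestion state */
}
```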
For step S14, the manner of extracting the header data from the data packet may be flexibly selected, for example, the specified field may be used as the header data, or a field with a preset length may be used as the header data, which is not specifically limited in this embodiment.
In one possible implementation, extracting the header data of the service message from the data packet may be further implemented as: taking the initial field of the data packet as the starting point of the message header, extracting data of a preset length from the data packet, and taking this data of the preset length as the message header data.
The data except the header in the data packet is the message body data of the service message.
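A short C sketch of this extraction rule, assuming a preset header length of 64 bytes (the patent does not fix this value), is shown below:

```c
#include <stdint.h>
#include <string.h>

/* HDR_PRESET_LEN is an assumed configuration value, not one specified by the patent. */
#define HDR_PRESET_LEN 64

/* Take a preset number of bytes starting from the initial field of the data
 * packet as the message header; whatever follows is the message body data. */
static void split_packet(const uint8_t *pkt, uint32_t pkt_len,
                         uint8_t *hdr_out, uint32_t *hdr_len,
                         const uint8_t **body, uint32_t *body_len)
{
    *hdr_len = pkt_len < HDR_PRESET_LEN ? pkt_len : HDR_PRESET_LEN;
    memcpy(hdr_out, pkt, *hdr_len);   /* header data, destined for the CPU cache */
    *body = pkt + *hdr_len;           /* message body data, destined for DDR     */
    *body_len = pkt_len - *hdr_len;
}
```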
Further, with reference to fig. 5, for step S142, writing header data into the cache of the central processing unit may be implemented by the following steps.
S1422, the header data and the data address are sent to the bus.
It should be noted that the data address is the address of the storage area storing the data packet in the double-rate synchronous dynamic random access memory.
S1424, storing the header data into the allocated specific area in the cache of the CPU according to the data address via the bus.
After the bus receives the data address and the header data, the header data is stored in the allocated specific area in the cache of the central processing unit according to the data address.
Referring to fig. 6, a central processor may include a plurality of cores, each of which may include a plurality of buffers (also referred to as Cache lines), each of which may store a certain length of data.
For step S1422, for the header data of each service message, when the length of the header data exceeds the length of one cache line, the DMA controller may divide the header data into multiple segments and send the data address and the header data to the bus through multiple transfers, one segment of data at a time. For example, the DMA controller may send the data address and header data to the bus through an ACE5-LiteACP master interface, a typical ARM interface.
After the bus receives the data address and the header data, the header data is written, in units of cache lines and according to the data address, into the specific area allocated in the Cache of the CPU through a CPU interface (for example, an ARM ACE5-LiteACP slave interface).
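The segmentation of the header into cache-line-sized bus transfers described above can be sketched as follows; the cache-line size and the bus_push_segment() call are assumptions standing in for the actual master-interface write:

```c
#include <stdint.h>

#define CACHE_LINE_BYTES 64   /* assumed cache-line size */

/* Placeholder for one master-interface write carrying the packet's DDR data
 * address together with one segment of header data. */
extern void bus_push_segment(uint64_t data_addr, const uint8_t *seg, uint32_t len);

/* Push the header to the bus one cache-line-sized segment at a time. */
static void push_header_to_bus(uint64_t data_addr, const uint8_t *hdr, uint32_t hdr_len)
{
    for (uint32_t off = 0; off < hdr_len; off += CACHE_LINE_BYTES) {
        uint32_t seg_len = hdr_len - off;
        if (seg_len > CACHE_LINE_BYTES)
            seg_len = CACHE_LINE_BYTES;
        bus_push_segment(data_addr, hdr + off, seg_len);  /* one cache line per transfer */
    }
}
```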
Further, referring to fig. 7, step S1421 may also be included before step S1422.
S1421, allocating a free specific area for the header data from the cache of the central processor.
The manner of allocating the free specific area to the header data may be flexibly set, for example, the specific area may be randomly selected by the central processor from the areas in the cache that do not store data, or when the DMA RING receives the data packet of the service packet, the DMA RING may allocate the specific area from the areas in the cache that do not store data according to the monitoring result of the cache. In the present embodiment, there is no particular limitation.
In one possible implementation, when the DMA RING or the central processing unit determines that the free specific area is allocated to the header data, the bus may be notified of the buffer address of the specific area and the data address of the storage area storing the body data of the message in a pairing relationship, so that when the bus receives the data address and the header data, the bus may determine, according to the data address, the buffer address paired with the data address, and write the header data into the buffer area corresponding to the buffer address in the cache.
In the message processing method provided by the embodiment of the invention described above, whether the current packet reception is in a congestion state is determined from the backlog of unprocessed DMA descriptors of the DMA RING and the pre-configured threshold value. In the non-congested state, the corresponding message header data is written directly into the cache of the CPU, via a bus command, at the moment the packet is received, so that when the CPU perceives the message reception the header data is already in its cache and the DDR does not need to be accessed to acquire it. This effectively reduces the access latency when the CPU processes the message, greatly improves the soft forwarding efficiency, and improves the overall forwarding bandwidth and forwarding performance.
In a possible implementation manner, the embodiment of the present invention further provides a message processing method, and referring to fig. 8, the method may include the following steps. In this embodiment, the method for processing a message may be applied to the network device 110 in fig. 1. The network device 110 stores the packet of the service packet sent by the external device 120 by using the packet processing method provided in the above embodiment.
S21, polling the DMA descriptor, if the currently polled DMA descriptor is a second state value, acquiring a data address from the DMA descriptor, and determining a service message corresponding to the DMA descriptor.
Wherein the second state value representation is in a message data reception completion state. After the data address is acquired, the service message corresponding to the DMA descriptor can be determined.
S23, identifying whether the message header data of the service message exists in the cache of the central processing unit, if so, analyzing and identifying the message header data in the cache, and looking up a table based on an identification result to determine forwarding information.
S25, obtaining the residual data of the service message from the storage area corresponding to the data address in the double-rate synchronous dynamic random access memory, and editing the residual data to obtain the message to be forwarded.
It should be noted that, the remaining data refers to the message body data except the header data in the data packet of the service message. Step S25 and step S23 may be performed simultaneously or sequentially.
And S27, forwarding the message to be forwarded to the next hop device based on the forwarding information, setting the DMA descriptor to a first state value, and releasing the data address in the DMA descriptor.
In this embodiment, the first state value indicates that message data reception is not complete.
For step S23, if not, the data packet of the service packet is obtained from the storage area corresponding to the data address in the double rate synchronous dynamic random access memory (DDR), and the data packet is parsed, identified, checked and edited to obtain the message to be forwarded and the forwarding information, and then the message to be forwarded is forwarded to the next hop device based on the forwarding information.
Through steps S21-S27, when the state value of the currently polled DMA descriptor indicates that message data reception is complete and the header data of the message described by the DMA descriptor is in the cache of the CPU, the network device 110 can directly process the header data in the cache to obtain the forwarding information, and then edit the remaining data based on the forwarding information to obtain the message to be forwarded. In the message forwarding process, the network device 110 therefore only needs to obtain the remaining data when accessing the DDR, which effectively reduces the access pressure on the DDR, thereby greatly improving the soft forwarding efficiency, the overall forwarding bandwidth, and the forwarding performance of the network device 110.
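For illustration, steps S21 to S27 can be sketched in C as a polling loop over the same hypothetical descriptor ring; every helper function below is a placeholder for an operation (cache lookup, DDR read, parsing and table lookup, editing, transmission) that the patent describes only at the functional level:

```c
#include <stdint.h>
#include <stddef.h>

enum { DESC_RX_PENDING = 0, DESC_RX_DONE = 1 };
typedef struct dma_descriptor {
    uint32_t state;
    uint64_t data_addr;
    struct dma_descriptor *next;
} dma_descriptor_t;

typedef struct { uint32_t egress_port; } fwd_info_t;   /* minimal stand-in */

extern const uint8_t *lookup_in_cpu_cache(uint64_t data_addr);  /* NULL if absent */
extern const uint8_t *ddr_read(uint64_t data_addr);
extern fwd_info_t parse_and_lookup(const uint8_t *data);
extern uint8_t *edit_message(const uint8_t *data, const fwd_info_t *fwd);
extern void send_to_next_hop(const uint8_t *msg, const fwd_info_t *fwd);

void forwarding_poll(dma_descriptor_t *ring_head)
{
    dma_descriptor_t *d = ring_head;
    do {
        if (d->state == DESC_RX_DONE) {
            /* Header already in the CPU cache: parse it there; otherwise fall
             * back to reading the packet data from DDR. */
            const uint8_t *hdr = lookup_in_cpu_cache(d->data_addr);
            fwd_info_t fwd = parse_and_lookup(hdr ? hdr : ddr_read(d->data_addr));
            /* The DDR storage area holds the remaining data (or the whole
             * packet in the congested case); edit it into the outgoing message. */
            uint8_t *msg = edit_message(ddr_read(d->data_addr), &fwd);
            send_to_next_hop(msg, &fwd);
            d->state = DESC_RX_PENDING;  /* back to the first state value */
            d->data_addr = 0;            /* release the data address      */
        }
        d = d->next;
    } while (d != ring_head);
}
```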
Based on the same inventive concept as the above-mentioned message processing method, in a possible implementation manner, the embodiment of the present invention further provides a message processing apparatus 130, which may be applied to the network device 110 in fig. 1. Referring to fig. 9, the message processing apparatus 130 may include a congestion judging module 140 and a storage module 150.
The congestion judging module 140 is configured to judge whether the current packet is in a congestion state when it is monitored that the external device starts sending the data packet of the service packet.
The storage module 150 is configured to receive a data packet when the current packet is not in a congestion state, extract header data of a service packet from the data packet, write the header data into a cache of the central processing unit, and write the data packet except for the header data into the double-rate synchronous dynamic random access memory.
The storage module 150 is further configured to receive a data packet and write the data packet into the double rate synchronous dynamic random access memory when the current received packet is in a congestion state.
Further, the message processing apparatus 130 may further include a polling module 160, an identification module 170, an editing module 180, and a forwarding module 190.
The polling module 160 is configured to poll the DMA descriptor, and if the currently polled DMA descriptor is the second status value, acquire the data address from the DMA descriptor, and determine the service message corresponding to the DMA descriptor. Wherein the second state value representation is in a message data reception completion state.
The identifying module 170 is configured to identify whether header data of a service packet exists in a cache of the central processing unit, if yes, analyze and identify the header data in the cache, and look up a table based on an identification result to determine forwarding information.
And the editing module 180 is configured to obtain remaining data of the service packet from a storage area corresponding to the data address in the double-rate synchronous dynamic random access memory, and edit the remaining data to obtain a packet to be forwarded.
And the forwarding module 190 is configured to forward the message to be forwarded to the next-hop device based on the forwarding information, set the DMA descriptor to the first state value, and release the data address in the DMA descriptor. The first state value indicates that message data reception is not complete.
In the above-mentioned packet processing device 130, through the synergistic effect of the congestion judging module 140, the storage module 150, the polling module 160, the identifying module 170, the editing module 180 and the forwarding module 190, when the current packet is not in a congestion state, the header data of the received service packet is directly written into the cache of the central processing unit, so that when the central processing unit processes the service packet, the header data is already in the cache, and the access to the double-rate synchronous dynamic random access memory is not needed to obtain the header data, thereby effectively reducing the access pressure of the central processing unit to the double-rate synchronous dynamic random access memory, and further improving the overall performance of soft forwarding.
For specific limitations of the message processing apparatus 130, reference may be made to the above limitations of the message processing method, and no further description is given here. The modules in the message processing apparatus 130 may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or independent of a processor in the electronic device, or may be stored in software in a memory of the electronic device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, an electronic device 200 is provided, the electronic device 200 may be a terminal, and an internal structure diagram thereof may be as shown in fig. 10. The electronic device 200 comprises a processor, a memory, a communication interface, a display screen and an input means connected by a system bus. Wherein the processor of the electronic device 200 is used to provide computing and control capabilities. The memory of the electronic device 200 includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the electronic device 200 is used for performing wired or wireless communication with an external terminal, where the wireless communication may be implemented through WIFI, an operator network, near Field Communication (NFC), or other technologies. The computer program, when executed by a processor, implements the message processing method provided in the above embodiment.
The structure shown in fig. 10 is merely a block diagram of a portion of the structure related to the present invention and does not constitute a limitation of the electronic device 200 to which the present invention is applied, and a specific electronic device 200 may include more or less components than those shown in fig. 10, or may combine some components, or have a different arrangement of components.
In one embodiment, the message processing apparatus 130 provided in the present invention may be implemented as a computer program, which may be executed on the electronic device 200 as shown in fig. 10. The memory of the electronic device 200 may store various program modules constituting the packet processing apparatus 130, such as the congestion judging module 140, the storage module 150, the polling module 160, the identification module 170, the editing module 180, and the forwarding module 190 shown in fig. 9. The computer program of each program module causes a processor to execute the steps of the message processing method described in the present specification.
For example, the electronic device 200 shown in fig. 10 may execute step S12 through the congestion judging module 140 in the message processing apparatus 130 shown in fig. 9. The electronic device 200 may perform steps S14 and S16 through the storage module 150. The electronic device 200 may perform step S21 through the polling module 160. The electronic device 200 may perform step S23 through the identification module 170. The electronic device 200 may perform step S25 through the editing module 180. The electronic device 200 may perform step S27 through the forwarding module 190.
In one embodiment, an electronic device 200 is provided that includes a memory storing machine executable instructions and a processor that, when executing the machine executable instructions, performs the following steps: when it is monitored that an external device starts to send a data packet of a service message, judging whether the current packet reception is in a congestion state; if not, receiving the data packet, extracting the message header data of the service message from the data packet, writing the message header data into a cache of the central processing unit, and writing the data other than the message header data into the double-rate synchronous dynamic random access memory; if yes, receiving the data packet and writing the data packet into the double-rate synchronous dynamic random access memory.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon which, when executed by a processor, performs the following steps: when it is monitored that an external device starts to send a data packet of a service message, judging whether the current packet reception is in a congestion state; if not, receiving the data packet, extracting the message header data of the service message from the data packet, writing the message header data into a cache of the central processing unit, and writing the data other than the message header data into the double-rate synchronous dynamic random access memory; if yes, receiving the data packet and writing the data packet into the double-rate synchronous dynamic random access memory.
In one embodiment, an electronic device 200 is provided that includes a memory storing machine executable instructions and a processor that, when executing the machine executable instructions, performs the following steps: polling the DMA descriptors; if the currently polled DMA descriptor has the second state value, acquiring the data address from the DMA descriptor and determining the service message corresponding to the DMA descriptor; identifying whether the message header data of the service message exists in the cache of the central processing unit, and if so, parsing and identifying the message header data in the cache and looking up a table based on the identification result to determine the forwarding information; obtaining the remaining data of the service message from the storage area corresponding to the data address in the double-rate synchronous dynamic random access memory, and editing the remaining data to obtain the message to be forwarded; and forwarding the message to be forwarded to the next-hop device based on the forwarding information, setting the DMA descriptor to the first state value, and releasing the data address in the DMA descriptor.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon which, when executed by a processor, performs the following steps: polling the DMA descriptors; if the currently polled DMA descriptor has the second state value, acquiring the data address from the DMA descriptor and determining the service message corresponding to the DMA descriptor; identifying whether the message header data of the service message exists in the cache of the central processing unit, and if so, parsing and identifying the message header data in the cache and looking up a table based on the identification result to determine the forwarding information; obtaining the remaining data of the service message from the storage area corresponding to the data address in the double-rate synchronous dynamic random access memory, and editing the remaining data to obtain the message to be forwarded; and forwarding the message to be forwarded to the next-hop device based on the forwarding information, setting the DMA descriptor to the first state value, and releasing the data address in the DMA descriptor.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device 110, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A message processing method, characterized in that it is applied to a network device, the network device comprising a central processing unit and a double-rate synchronous dynamic random access memory, and the method comprising:
when it is monitored that an external device starts to send a data packet of a service message, judging whether the current packet reception is in a congestion state;
if not, receiving the data packet, extracting the message header data of the service message from the data packet, writing the message header data into a cache of the central processing unit, and writing the data in the data packet other than the message header data into the double-rate synchronous dynamic random access memory;
if yes, receiving the data packet and writing the data packet into the double-rate synchronous dynamic random access memory.
2. The method of claim 1, wherein the step of receiving the data packet comprises:
adopting direct memory access, receiving a data packet of a service message sent by external equipment, and updating a state value of a DMA descriptor from a first state value to a second state value;
the first state value indicates that message data reception is not complete, and the second state value indicates that message data reception is complete;
Before the step of writing the data other than the message header data into the double-rate synchronous dynamic random access memory, or before the step of writing the data packet into the double-rate synchronous dynamic random access memory, the method further comprises:
and determining the data address of the storage area for storing the data packet in the double-rate synchronous dynamic random access memory.
3. The message processing method according to claim 1 or 2, wherein the step of judging whether the current packet reception is in a congestion state comprises:
determining, from all DMA descriptors, the total number of descriptors indicating the message-data-reception-complete state;
and judging whether the total number of descriptors exceeds a preset threshold value; if so, judging that the current packet reception is in a congestion state, and if not, judging that the current packet reception is not in a congestion state.
4. The method according to claim 1 or 2, wherein the step of writing the header data into the cache of the central processing unit comprises:
transmitting the message header data and the data address to a bus; the data address is an address of a storage area storing the data packet in the double-rate synchronous dynamic random access memory;
And storing the header data into the allocated specific area in the cache of the central processing unit through the bus according to the data address.
5. The message processing method according to claim 4, wherein before the step of sending the header data and the data address to the bus, the method comprises:
and allocating a free specific area for the message header data from the cache of the central processing unit.
6. The method for processing a packet according to claim 1 or 2, wherein the step of extracting header data of the service packet from the data packet includes:
and extracting data of a preset length from the data packet, taking the initial field of the data packet as the starting point of the message header.
7. A method for processing a message, the method being applied to a network device, the network device including a central processing unit and a double rate synchronous dynamic random access memory, the method comprising:
polling a DMA descriptor, if the currently polled DMA descriptor is a second state value, acquiring a data address from the DMA descriptor, and determining a service message corresponding to the DMA descriptor; wherein, the second state value representation is in a message data receiving completion state;
Identifying whether the message header data of the service message exists in a cache of the central processing unit, if so, analyzing and identifying the message header data in the cache, and looking up a table based on an identification result to determine forwarding information;
obtaining the residual data of the service message from a storage area corresponding to the data address in the double-rate synchronous dynamic random access memory, and editing the residual data to obtain a message to be forwarded;
and forwarding the message to be forwarded to next-hop equipment based on the forwarding information, setting the DMA descriptor as a first state value, and releasing a data address in the DMA descriptor.
8. A message processing device, characterized in that it is applied to a network device, the network device comprising a central processing unit and a double-rate synchronous dynamic random access memory, and the message processing device comprising a congestion judging module and a storage module;
the congestion judging module is used for judging whether the current packet reception is in a congestion state when it is monitored that an external device starts to send a data packet of a service message;
the storage module is used for, if not, receiving the data packet, extracting the message header data of the service message from the data packet, writing the message header data into the cache of the central processing unit, and writing the data in the data packet other than the message header data into the double-rate synchronous dynamic random access memory;
and the storage module is further used for, if yes, receiving the data packet and writing the data packet into the double-rate synchronous dynamic random access memory.
9. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the message processing method of any of claims 1 to 7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the message processing method according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310237832.4A CN116208574A (en) | 2023-03-13 | 2023-03-13 | Message processing method, device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116208574A true CN116208574A (en) | 2023-06-02 |
Family
ID=86511090
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310237832.4A Pending CN116208574A (en) | 2023-03-13 | 2023-03-13 | Message processing method, device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116208574A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117076346A (en) * | 2023-07-24 | 2023-11-17 | 龙芯中科(成都)技术有限公司 | Application program data processing method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107426113B (en) | Message receiving method and network equipment | |
CN114338548B (en) | Message distribution method, device, network equipment and computer readable storage medium | |
CN113138802B (en) | Command distribution device, method, chip, computer device and storage medium | |
US11139999B2 (en) | Method and apparatus for processing signals from messages on at least two data buses, particularly CAN buses; preferably in a vehicle; and system | |
CN116208574A (en) | Message processing method, device, electronic equipment and computer readable storage medium | |
CN114885045B (en) | Method and device for saving DMA channel resources in high-speed intelligent network card/DPU | |
CN113114707B (en) | Rule filtering method for power chip Ethernet controller | |
CN104202212A (en) | System and method for obtaining distributed cluster system alarm | |
CN114500633A (en) | Data forwarding method, related device, program product and data transmission system | |
CN116233018A (en) | Message processing method and device, electronic equipment and storage medium | |
CN109614345B (en) | Memory management method and device for communication between protocol layers | |
CN113301123A (en) | Data stream processing method, device and storage medium | |
WO2021237431A1 (en) | Data processing method and apparatus, processing device, and data storage system | |
CN106453663B (en) | Improved storage expansion method and device based on cloud service | |
CN114615355B (en) | Message processing method and message analysis module | |
CN107592361B (en) | Data transmission method, device and equipment based on dual IB network | |
CN115278395A (en) | Network switching equipment, data stream processing control method and related equipment | |
CN117499351A (en) | Message forwarding device and method, communication chip and network equipment | |
CN114448858B (en) | Message broadcasting method, device, network equipment and storage medium | |
JP2007221522A (en) | Polling device, terminal device, polling method and program | |
JPWO2014087654A1 (en) | Data transmission apparatus, data transmission method, and recording medium | |
CN114125078A (en) | MAC address learning method and device | |
US10862814B2 (en) | Exception handling in a multi-user wireless communication device based on user tag values | |
CN112769701A (en) | Method and device for forwarding message | |
CN111240867A (en) | Information communication system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||