
CN115225578A - Message processing method, device, switch, system and medium - Google Patents

Message processing method, device, switch, system and medium

Info

Publication number
CN115225578A
CN115225578A CN202210877676.3A CN202210877676A
Authority
CN
China
Prior art keywords
message
target
processed
flow
data packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210877676.3A
Other languages
Chinese (zh)
Inventor
王倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Armyfly Technology Co Ltd
Original Assignee
Beijing Armyfly Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Armyfly Technology Co Ltd filed Critical Beijing Armyfly Technology Co Ltd
Priority to CN202210877676.3A priority Critical patent/CN115225578A/en
Publication of CN115225578A publication Critical patent/CN115225578A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2441 Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention discloses a message processing method, device, switch, system and medium. The method comprises the following steps: shunting a message flow to be processed according to message characteristic information to obtain at least one target message flow; for each target message flow, reporting a data packet input message based on a target message to be processed of the target message flow to a controller for processing to obtain a corresponding processing result; and for each target message flow, processing each message to be processed in the target message flow according to the corresponding processing result. By shunting the message flow to be processed, each target message flow obtained by the shunting can be conveniently classified and processed; and because only the data packet input message based on the target message to be processed is reported to the controller for processing, repeated reporting of data packet input messages for the same message flow can be reduced, the impact of a large number of data packet input messages on the controller is effectively reduced, and the accuracy and efficiency of message processing are improved.

Description

Message processing method, device, switch, system and medium
Technical Field
The embodiment of the invention relates to the technical field of software defined networking, in particular to a message processing method, device, switch, system and medium.
Background
Software Defined Networking (SDN) separates the forwarding and control planes of a network, moving the entire control plane into a separate controller. Various routing protocols can run on the controller, which issues the calculated flow table entries to the corresponding forwarding devices (such as OpenFlow switches) as required. The forwarding device then processes the corresponding messages according to the received flow table entries.
At present, after a forwarding device receives a Packet, if a table entry matching the Packet is not found in a flow table, the Packet may be forwarded to a controller through a corresponding Packet-in message, and the controller decides a forwarding path of the Packet according to the Packet-in message and issues a corresponding flow table entry.
However, when the forwarding device receives a large number of messages and these messages do not match any flow table entry, a large number of Packet-in messages are generated. The controller is impacted by this flood of Packet-in messages, becomes busy and cannot normally provide decision control, which affects the accuracy and efficiency of message processing.
Disclosure of Invention
The embodiment of the invention provides a message processing method, device, switch, system and medium, aiming to improve the accuracy and efficiency of message processing.
According to an aspect of the embodiments of the present invention, a method for processing a packet is provided, including:
shunting a message flow to be processed according to message characteristic information to obtain at least one target message flow, wherein the target message flow is a message flow consisting of a plurality of messages to be processed with the same message characteristic information, and the target message flow corresponds to one message characteristic information;
for each target message flow, reporting a data packet input message of a target message to be processed based on the target message flow to a controller for processing to obtain a corresponding processing result, wherein the target message to be processed is one message to be processed in the target message flow;
and aiming at each target message flow, processing each message to be processed in the target message flow according to the corresponding processing result.
According to another aspect of the embodiments of the present invention, there is provided a message processing apparatus, including:
the distribution module is used for distributing the message flows to be processed according to the message characteristic information to obtain at least one target message flow, wherein the target message flow is a message flow consisting of a plurality of messages to be processed with the same message characteristic information, and the target message flow corresponds to one message characteristic information;
a reporting module, configured to report, for each target packet flow, a packet input message of a target to-be-processed packet based on the target packet flow to a controller for processing, so as to obtain a corresponding processing result, where the target to-be-processed packet is a to-be-processed packet in the target packet flow;
and the processing module is used for processing each message to be processed in each target message flow according to the corresponding processing result.
According to another aspect of the embodiments of the present invention, there is provided a switch, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the message processing method according to any of the embodiments of the invention.
According to another aspect of the embodiments of the present invention, a computer-readable storage medium is provided, where computer instructions are stored, and the computer instructions are configured to enable a processor to implement the message processing method according to any embodiment of the present invention when the computer instructions are executed.
According to the technical scheme of the embodiment of the invention, firstly, a message flow to be processed is shunted according to message characteristic information to obtain at least one target message flow, wherein the target message flow is a message flow composed of a plurality of messages to be processed with the same message characteristic information, and the target message flow corresponds to one message characteristic information; then, for each target message flow, reporting a data packet input message of a target message to be processed based on the target message flow to a controller for processing to obtain a corresponding processing result, wherein the target message to be processed is one message to be processed in the target message flow; and finally, processing each message to be processed in the target message flow according to the corresponding processing result aiming at each target message flow. By shunting the message streams to be processed, the technical scheme can be convenient for classifying and processing each shunted target message stream; and only reporting the data packet input message based on the target message to be processed to the controller for processing, so that the repeated reporting of the data packet input message can be reduced for the same message stream, and the impact of a large amount of data packet input messages on the controller is effectively reduced, thereby improving the accuracy and efficiency of message processing.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram illustrating an implementation of a packet forwarding process according to an embodiment of the present invention;
fig. 2 is a flowchart of a message processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of a message processing method according to a second embodiment of the present invention;
fig. 4 is a schematic diagram illustrating an implementation of a message processing method according to a second embodiment of the present invention;
fig. 5 is a schematic diagram illustrating another implementation of a message processing method according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of a message processing apparatus according to a third embodiment of the present invention;
fig. 7 is a schematic structural diagram of a switch according to a fourth embodiment of the present invention;
fig. 8 is a schematic structural diagram of a message processing system according to a fifth embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," "object," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For a better understanding of embodiments of the present invention, the following description is made with respect to terms.
An OpenFlow switch: refers to a switch device based on the OpenFlow protocol. A switch (also called a network switch) is network hardware that receives and forwards data to a target device through message switching.
Flow: data having a certain common characteristic or attribute and passing through the same network at the same time is abstracted into one flow. For example, data accessing the same address may be considered one flow. Different control strategies may be applied to different flows for corresponding processing.
A flow table: a set of policy entries for a specific flow, responsible for looking up and forwarding data packets. Corresponding messages can be matched and processed through the flow table.
Table entry: also referred to as a flow entry or flow table entry. A flow table may include multiple entries. An entry may consist of fields such as match fields (Match Fields), priority (Priority), processing instructions (Instructions), and statistics (e.g., counters).
A flow entry's set of instructions and actions (Instructions & Actions) defines the processing to be performed on packets matching that entry. When a packet matches a flow entry, the instruction set included in that entry is executed; in other words, it tells the switch how to handle a matching packet after it is received.
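For illustration only, a flow table entry of this kind can be modeled as a small record of match fields, priority, instructions and counters. The Python sketch below uses assumed field names (match_fields, priority, instructions, packet_count) and a simplified exact-match rule; it is not the OpenFlow wire format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FlowEntry:
    """Simplified model of a flow table entry (illustrative field names, not the OpenFlow wire format)."""
    match_fields: Dict[str, str]   # e.g. {"dst_ip": "10.0.0.1", "dst_port": "80"}
    priority: int                  # higher-priority entries are matched first
    instructions: List[str]        # e.g. ["output:2"] or ["output:CONTROLLER", "meter:1"]
    packet_count: int = 0          # statistics (counters)
    byte_count: int = 0

def matches(entry: FlowEntry, headers: Dict[str, str]) -> bool:
    """A packet matches the entry when every match field agrees with the packet's headers."""
    return all(headers.get(k) == v for k, v in entry.match_fields.items())
```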
Message: a message is a data unit exchanged and transmitted in the network, i.e. a data block sent by a station at one time. A message contains the complete data to be sent; its length is not fixed and may vary without limit.
Packet-in message: the function is to send a packet arriving at the OpenFlow switch to the controller. That is, the related data information to be forwarded to the controller is encapsulated into a Packet-in message and sent to the controller.
Packet-out message: its function is for the controller to send, back to the OpenFlow switch, the data processing result obtained by analyzing and processing the received Packet-in message; a Packet-out message is a message containing a packet sending command.
SDN separates the forwarding and control planes of the network, moving the entire control plane to a separate controller. Various routing protocols can run on the Controller, which issues the calculated flow table entries to the corresponding forwarding devices as required. Flow table entries may be issued actively or passively. Specifically, in the active mode, the controller actively issues the flow table entries it has collected to the forwarding device, and the forwarding device can then directly look up the flow table and forward a message as soon as the message is received; in the passive mode, after the forwarding device receives a message, if no matching entry is found in the flow table, the forwarding device forwards the message to the controller, and the controller decides the forwarding path of the message and issues the corresponding flow table entry. The advantage of the passive mode is that the forwarding device does not need to maintain all flow table entries; the corresponding flow table entries are obtained from the controller and stored only when the actual data flow appears, and they can be deleted after the aging timer expires.
In the present invention, in the passive mode, a message that is forwarded to the reserved port corresponding to the Controller, either after matching a flow table entry or after failing to match any flow table entry (i.e., Table Miss), is sent to the Controller through a Packet-in message. The Packet-in message can carry the whole message whose control is to be transferred, or, by setting a Buffer for the message in the OpenFlow switch, it can carry only a message of limited maximum length together with the corresponding Buffer identifier (ID). After receiving the Packet-in message, the Controller processes the message carried by the Packet-in message, or the message header (i.e., the message of limited maximum length) and the Buffer ID carried by it, to obtain a processing result, and sends back a corresponding Packet-out message (i.e., the processing result) and a flow table to notify the OpenFlow switch how to process this message and subsequent messages (a subsequent message may be understood as another message stored in the Buffer that has the same message header as this message).
When a large amount of high-traffic service (i.e. a large number of messages) fails to match any flow table entry (Table Miss) and needs to be forwarded to the Controller, the resulting flood of Packet-in messages may impact the Controller, making it busy and unable to normally provide decision control. The current approach is for the Controller to configure a Table Miss entry, that is, a lowest-priority flow entry matching all flows (a preset default flow entry used when a message matches no other flow table entry), and issue it to the Openflow switch. The instruction of this flow entry may specify forwarding to the Controller port and a Meter (i.e., rate-limiting the messages matched to this entry), and the Meter rate limit reduces the impact on the Controller of the Packet-in messages generated by Table Miss. When a message triggers the Table Miss entry, the Openflow switch rate-limits it and sends it to the Controller in a Packet-in message. However, because this approach cannot distinguish services, messages of the same flow keep being sent to the Controller while other flows are rate-limited (for example, when the rate limit allows 100 messages per round, 100 messages of the same flow may be sent to the Controller while the messages of other flows are held back and wait for the next round), so the other flows cannot be forwarded to the Controller for a long time and are slow to enter normal forwarding.
Fig. 1 is a schematic diagram illustrating implementation of packet forwarding processing according to an embodiment of the present invention. As shown in fig. 1, 1 denotes a controller and 2 denotes an OpenFlow switch. The OpenFlow Switches are communicatively connected with each other, and each OpenFlow Switch is communicatively connected with the Controller through a management connection. When an OpenFlow Switch receives a data packet (i.e., a Packet) that does not match any flow table entry, a Packet-in message may be generated based on the Packet and sent to the Controller; the Controller parses and processes the received Packet-in message to obtain a corresponding Packet-out message and flow table entry, and sends them back to the corresponding OpenFlow Switch; the OpenFlow Switch then forwards the Packet according to the Packet-out message and the flow table entry.
Example one
Fig. 2 is a flowchart of a message processing method according to an embodiment of the present invention, where the method is applicable to a case of processing a message, and the method may be executed by a message processing apparatus, where the message processing apparatus may be implemented in a form of hardware and/or software, and the message processing apparatus may be configured in a switch, and in this embodiment, the switch may be an Openflow switch. As shown in fig. 2, the method includes:
s110, distributing the message flows to be processed according to the message characteristic information to obtain at least one target message flow, wherein the target message flow is a message flow composed of a plurality of messages to be processed with the same message characteristic information, and the target message flow corresponds to one message characteristic information.
In this embodiment, the message feature information may be understood as information included in the message, which can be used to characterize the message feature or attribute. For example, the message characteristic information may include, but is not limited to: a source Internet Protocol (IP) address, a source port, a destination IP address, a destination port and transport layer Protocol, a source Media Access Control (MAC) address, a destination MAC address, and a transport layer Protocol number.
A message flow to be processed is understood to be a message flow consisting of a plurality of messages to be processed. In the application scenario of this step, a message to be processed can be regarded as a message that has not been matched to a corresponding flow table entry. Splitting can be understood as splitting the message flow to be processed into a plurality of different message flows based on the message characteristic information; each message flow can correspond to one piece of message characteristic information, and each message flow can comprise a plurality of messages to be processed; for each message flow, every message to be processed in the message flow corresponds to the same message characteristic information (i.e., the message characteristic information corresponding to the message flow). A target message flow is understood to be a message flow consisting of a plurality of messages to be processed with the same message characteristic information.
Specifically, the message stream to be processed may be split according to the message characteristic information, and the message to be processed having the same message characteristic information may be split into one stream, so as to obtain one or more target message streams. Each target message flow may correspond to a message characteristic information. For each target packet flow, the target packet flow may include multiple packets to be processed, and each packet to be processed in the target packet flow may correspond to the same packet feature information (i.e., the packet feature information corresponding to the target packet flow).
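As a minimal sketch of this splitting step, assuming each message to be processed is represented as a dictionary of parsed header fields (the field names src_ip, src_port, dst_ip, dst_port and proto are assumptions), the five-tuple can serve as the flow KEY, and packets sharing a KEY are grouped into one target message flow:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Packet = Dict[str, str]  # assumed representation: parsed header fields of one message to be processed

def flow_key(pkt: Packet) -> Tuple[str, ...]:
    """Use the five-tuple as the message characteristic information (KEY) of the flow."""
    return (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"], pkt["proto"])

def split_into_flows(pending: List[Packet]) -> Dict[Tuple[str, ...], List[Packet]]:
    """Split the message flow to be processed: packets sharing a KEY form one target message flow."""
    flows: Dict[Tuple[str, ...], List[Packet]] = defaultdict(list)
    for pkt in pending:
        flows[flow_key(pkt)].append(pkt)
    return dict(flows)
```

Calling split_into_flows on a list of pending packets returns a mapping from each KEY to the messages of the corresponding target message flow.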
S120, for each target message flow, reporting a data packet input message of a target message to be processed based on the target message flow to a controller for processing to obtain a corresponding processing result, wherein the target message to be processed is one message to be processed in the target message flow.
In this embodiment, for each target packet flow, a target packet to be processed may be understood as a packet to be processed in the target packet flow; that is, the target pending packet may be any pending packet in the target packet stream. The target message to be processed is not specifically limited, and can be flexibly set according to actual requirements; for example, the target packet to be processed may be a flow header packet of the target packet flow, and the flow header packet may be understood as a first packet to be processed in the target packet flow; or, the target message to be processed may be a flow end packet of the target message flow, and the flow end packet may be understood as the last message to be processed in the target message flow; or, the target message to be processed may be a message to be processed with the smallest message size in the target message stream.
The Packet input message may be considered a Packet in message. The data packet input message of the target to-be-processed message based on the target message flow can be understood as a data packet input message generated based on the target to-be-processed message of the target message flow or related information (such as message characteristic information) contained in the target to-be-processed message of the target message flow; that is, the target message to be processed of the target message flow or the related information (such as message characteristic information) contained in the target message to be processed of the target message flow may be encapsulated in a corresponding data packet input message, so as to wait for being sent to the controller in the form of the data packet input message for corresponding processing.
A controller may be understood as a device communicatively coupled to a switch for processing incoming packets transmitted by the switch. The processing result can be understood as a result obtained by the controller performing corresponding analysis processing according to the received data packet input message; there is no particular limitation on how the controller processes the received packet input message to obtain the processing result. For example, the processing result may be a Packet output message (i.e., a Packet _ out message) corresponding to the Packet input message, and the processing result may include information such as an instruction on how to process the Packet to be processed in the target Packet flow.
Specifically, for each target message flow, the switch may report and transmit a data packet input message of a flow header packet based on the target message flow to a corresponding controller for processing, the controller performs corresponding parsing processing according to the received data packet input message to obtain a corresponding processing result, and sends the processing result back to the corresponding switch, and the switch receives the processing result.
S130, processing each message to be processed in each target message flow according to the corresponding processing result.
In this embodiment, for each target packet flow, each to-be-processed packet (including a target to-be-processed packet) in the target packet flow is processed according to a processing result corresponding to the target packet flow. The switch does not specifically limit how to process each message to be processed in the target message flow according to the processing result corresponding to the target message flow; for example, each message to be processed in the target message stream may be processed according to information such as an instruction set in the processing result.
The embodiment of the invention provides a message processing method, which comprises the steps of firstly shunting a message flow to be processed according to message characteristic information to obtain at least one target message flow, wherein the target message flow is a message flow consisting of a plurality of messages to be processed with the same message characteristic information, and the target message flow corresponds to one message characteristic information; then, for each target message flow, reporting a data packet input message of a target message to be processed based on the target message flow to a controller for processing to obtain a corresponding processing result, wherein the target message to be processed is one message to be processed in the target message flow; finally, aiming at each target message flow, each message to be processed in the target message flow is processed according to the corresponding processing result. The method can be convenient for classifying each target message flow after the distribution by distributing the message flow to be processed; and only the data packet input message based on the target message to be processed is reported to the controller for processing, so that the repeated reporting of the data packet input message can be reduced for the same message flow, the impact of a large number of data packet input messages on the controller is effectively reduced, and the accuracy and the efficiency of message processing are improved.
Optionally, the target message to be processed is a flow first packet of the target message flow, and the flow first packet is a first message to be processed in the target message flow; or, the target message to be processed is a flow tail packet of the target message flow, and the flow tail packet is the last message to be processed in the target message flow; or, the target message to be processed is the message to be processed with the smallest message size in the target message flow.
In this embodiment, the determined target to-be-processed packet is uploaded to the corresponding controller as a representative packet of the entire target packet stream for processing, so that the same packet stream can reduce repeated reporting of data packet input messages. On the basis, the first packet or the last packet of the target message flow is used as the target message to be processed, so that the target message to be processed and the corresponding first data packet input message which need to be reported can be determined from the target message flow quickly, and the reporting efficiency is improved. Or, the message to be processed with the smallest message size in the target message stream is used as the target message to be processed, so that the occupied physical space can be saved, and the longer transmission time can be avoided when the data packet input message corresponding to the target message to be processed is reported, thereby improving the message processing efficiency.
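A small helper, reusing the Packet alias from the sketch above, can express these three choices of target message to be processed; the strategy names and the optional length field are assumptions:

```python
def pick_target_packet(flow: List[Packet], strategy: str = "first") -> Packet:
    """Pick the message to be processed that represents the whole target message flow."""
    if strategy == "first":      # flow head packet
        return flow[0]
    if strategy == "last":       # flow tail packet
        return flow[-1]
    if strategy == "smallest":   # message to be processed with the smallest message size
        return min(flow, key=lambda p: int(p.get("length", "0")))
    raise ValueError(f"unknown strategy: {strategy}")
```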
Example two
Fig. 3 is a flowchart of a message processing method according to a second embodiment of the present invention, which is detailed based on the foregoing embodiments. In this embodiment, a process of reporting a data packet input message of a target to-be-processed message based on a target message flow to a controller for processing to obtain a corresponding processing result, and a process of processing each to-be-processed message in the target message flow according to the corresponding processing result are specifically described. It should be noted that technical details that are not described in detail in the present embodiment may be referred to any of the above embodiments. As shown in fig. 3, the method includes:
s210, distributing the message flow to be processed according to the message characteristic information to obtain at least one target message flow, wherein the message characteristic information corresponding to the target message flow is used as a key value of the target message flow, and the at least one target message flow is cached in a local cache according to the corresponding key value.
In this embodiment, the target packet stream is a packet stream composed of a plurality of packets to be processed having the same packet feature information. For each target message flow, the target message flow may correspond to a message feature information, and the message feature information may be used as a KEY value (i.e., KEY) of the target message flow. The key value may be understood as a value used to characterize the target message flow. The key value corresponding to each target packet flow may be different.
The local buffer can be understood as a local buffer of the switch and can be used for buffering a large number of received messages to be processed; that is, it can be used to cache each target packet stream. Each target message flow can be cached in the local cache according to the corresponding key value, so that the corresponding target message flow can be positioned in the local cache according to the key value in the following process.
And S220, generating a corresponding first data packet input message based on a key value corresponding to the target message to be processed and a cache identifier, wherein the cache identifier is used for indicating a target message flow to which the target message to be processed belongs in the local cache.
It can be understood that, a target message stream to which a target to-be-processed message belongs corresponds to a key value, and the key value may also be considered as a key value corresponding to the target to-be-processed message. That is, each message to be processed (including the target message to be processed) in the target message stream to which the target message to be processed belongs may correspond to the same key value (i.e., the key value corresponding to the target message stream to which the target message to be processed belongs).
In this embodiment, the cache identifier may be used to indicate a target packet stream to which a target packet to be processed belongs in the local cache; that is to say, the target message flow to which the target message to be processed belongs can be searched in the local buffer by the buffer identifier. The cache ID is not particularly limited, and may be represented as a Buffer ID. The cache identifier corresponding to each target message to be processed may be different.
The first data packet input message may be understood as a data packet input message generated based on a key value corresponding to the target message to be processed and the cache identifier. Specifically, a corresponding first data packet input message may be generated based on a key value and a cache identifier corresponding to a target message to be processed; there is no particular limitation on how the first packet input message is generated; for example, a key value and a cache identifier corresponding to the target pending packet may be encapsulated in a packet input message to form a corresponding first packet input message.
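The sketch below illustrates one possible shape of such a first data packet input message, assuming a dictionary-based message and an illustrative MAX_LEN of 128 bytes; the field names (buffer_id, key, data) are assumptions rather than the OpenFlow encoding:

```python
import itertools

MAX_LEN = 128                     # assumed maximum header length carried in the Packet-in
_buffer_ids = itertools.count(1)  # trivial cache identifier (Buffer ID) allocator for the sketch

def build_first_packet_in(target_pkt_bytes: bytes, key: tuple, priority: int = 0) -> dict:
    """Build a first data packet input message: truncated header plus Buffer ID plus KEY,
    instead of carrying the whole target message to be processed."""
    return {
        "type": "packet_in",
        "buffer_id": next(_buffer_ids),      # indicates the target message flow in the local cache
        "key": key,                          # KEY of the target message to be processed
        "priority": priority,                # priority identifier used to order the sending queue
        "data": target_pkt_bytes[:MAX_LEN],  # header content only, bounded by MAX_LEN
    }
```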
S230, caching the first data packet input message to a message sending queue.
In this embodiment, the message sending queue may be understood as a message queue for storing a plurality of data packet input messages; a message queue may be a container that holds messages during the transmission of a message. The message sending queue can store a plurality of data packet input messages, the plurality of data packet input messages can be stored in the message sending queue according to a certain arrangement sequence, and the data packet input messages are sent according to the sequential arrangement sequence when being sent. For example, each data packet input message may include a priority identifier (i.e., a priority identifier is used to indicate an order in a message sending queue of the data packet input message, for example, the higher the priority, the earlier the order, where the priority identifier is not specifically limited), and in the process of buffering each data packet input message into the message sending queue, the order of each data packet input message in the message sending queue may be determined according to the priority identifier corresponding to the data packet input message.
Optionally, the message sending queue includes at least one data packet input message, and the data packet input message determines an arrangement order in the message sending queue according to the corresponding priority identifier.
It should be noted that each data packet input message may include a priority identifier, which may be considered as a priority identifier carried by a to-be-processed packet in each data packet input message.
Specifically, the first data packet input message may be buffered in the message sending queue, and in the buffering process, the arrangement order of the first data packet input message in the message sending queue may be determined according to the priority identifier corresponding to the first data packet input message.
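A priority-ordered message sending queue can be sketched with a heap, as below; this is an assumed in-memory structure rather than a prescribed implementation, with higher priority identifiers dequeued first and first-in-first-out order within the same priority:

```python
import heapq
import itertools

class PacketInQueue:
    """Message sending queue ordered by priority identifier (higher first, FIFO within one priority)."""
    def __init__(self) -> None:
        self._heap: list = []
        self._seq = itertools.count()

    def push(self, msg: dict, priority: int = 0) -> None:
        # negate the priority so that a larger priority value is popped earlier
        heapq.heappush(self._heap, (-priority, next(self._seq), msg))

    def pop_batch(self, n: int) -> list:
        """Pop up to n messages in arrangement order (used when shaping the upload)."""
        batch = []
        while self._heap and len(batch) < n:
            batch.append(heapq.heappop(self._heap)[2])
        return batch

    def __len__(self) -> int:
        return len(self._heap)
```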
S240, judging whether the number of the data packet input messages in the message sending queue reaches a set threshold value, if so, executing S250; otherwise, S260 is performed.
In this embodiment, the set threshold may be understood as a preset number threshold, which is not limited herein. If the number of data packet input messages in the message sending queue reaches the set threshold, S250 may be executed; if the number of data packet input messages in the message sending queue does not reach the set threshold, S260 may be executed.
And S250, reporting a set number of data packet input messages from the message sending queue to the controller each time for processing to obtain corresponding processing results, and continuing to execute S270.
In this embodiment, the set number may be understood as a preset number, such as 100, 200, or 300.
A set number of data packet input messages from the message sending queue can be reported to the controller each time for processing, and corresponding processing results are obtained; the processing result can be the data packet output message corresponding to each data packet input message; it is understood that the processing result may include a first data packet output message corresponding to the first data packet input message. The first data packet output message may be understood as the processing result obtained by the controller performing corresponding parsing based on the first data packet input message among the received data packet input messages.
For example, in order to avoid a large number of data packet input messages being reported to the controller at one time to cause impact on the controller, a Meter value may be set to limit the reporting speed of the data packet input messages, where the speed limit may be considered as limiting the number of data packet input messages being reported each time, and if the Meter value is 100, 100 data packet input messages may be reported to the controller each time; the Meter value can be considered to be the set number. Specifically, when reporting and sending the data packet input messages in the message sending queue to the controller, only a set number of data packet input messages may be reported to the controller each time (i.e., the data packet input messages may be reported to the controller according to the Meter value), and the set number of data packet input messages are selected from front to back according to the arrangement sequence.
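Continuing the sketches above, the shaped reporting of S240 through S260 can be expressed as follows; send_to_controller, meter_value and threshold are assumed names for the control-channel send routine, the Meter value (the set number) and the set threshold:

```python
def report_packet_ins(queue: PacketInQueue, send_to_controller,
                      meter_value: int = 100, threshold: int = 100) -> None:
    """Shaped reporting: below the set threshold, report everything queued;
    otherwise report only a set number (the Meter value) per round."""
    batch_size = len(queue) if len(queue) < threshold else meter_value
    for msg in queue.pop_batch(batch_size):
        send_to_controller(msg)  # assumed hook: the switch's control-channel send routine
```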
It can be understood that, the processing result may further include a flow table issued by the controller based on the target to-be-processed packet, and the flow table (including a table entry of the flow table) may be included in the first data packet output message or may not be included in the first data packet output message; the flow table is used for subsequent processing of the same type of packet flow as the target packet flow.
And S260, reporting all data packet input messages in the message sending queue to the controller for processing, obtaining the data packet output message corresponding to each of the data packet input messages as the processing result, and continuing to execute S270.
If the number of the data packet input messages in the message sending queue does not reach the set threshold value, all the data packet input messages in the message sending queue can be reported to the controller for processing, and the data packet output message corresponding to each data packet input message in all the data packet input messages is obtained as a processing result. It is understood that the processing result may include a first packet output message corresponding to the first packet input message.
S270, determining a first data packet output message corresponding to the first data packet input message from the processing result according to the key value corresponding to the target message to be processed.
It is understood that the first packet input message includes a key value, and the first packet output message corresponding to the first packet input message may also include a key value. On this basis, the first packet output message corresponding to the first packet input message may be determined from the processing result according to the key value in the first packet output message.
S280, determining a target message flow to which the target message to be processed belongs according to the cache identifier in the first data packet output message.
It can be understood that, if the first packet input message includes the buffer identifier, the first packet output message corresponding to the first packet input message may also include the buffer identifier. On this basis, the target packet flow to which the target to-be-processed packet corresponding to the first data packet input message belongs may be determined according to the cache identifier in the first data packet output message, that is, the target packet flow to which the target to-be-processed packet belongs may be found from the local buffer according to the indication of the cache identifier.
S290, outputting each message to be processed in the target message stream to which the target message to be processed belongs according to the processing instruction in the first data packet output message.
In this embodiment, the processing instruction may be understood as an instruction or an instruction set indicating how to process the corresponding message to be processed. The processing instruction can be considered the Instructions described above.
It can be understood that the target message to be processed and the other messages to be processed in the target message stream to which the target message to be processed belongs have the same message characteristic information, and therefore the messages to be processed can be regarded as the same type of message. Therefore, the first data packet output message corresponding to the target message to be processed can be used for processing other messages to be processed which belong to the same class as the target message to be processed. On the basis, each message to be processed in the target message flow to which the target message to be processed belongs can be processed according to the processing instruction in the first data packet output message.
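A minimal sketch of this step, assuming a helper apply_instructions that executes a processing instruction on one cached message, and two assumed lookup tables (cache identifier to KEY, and KEY to cached flow), is shown below:

```python
def handle_first_packet_out(packet_out: dict,
                            key_by_buffer_id: dict,   # cache identifier -> flow KEY
                            flows_by_key: dict,       # local cache: flow KEY -> cached packets
                            apply_instructions) -> None:
    """Locate the target message flow from the Buffer ID in the Packet-out, then process every
    cached message of that flow according to the carried processing instructions."""
    key = key_by_buffer_id.pop(packet_out["buffer_id"])
    for pkt in flows_by_key.pop(key, []):
        apply_instructions(pkt, packet_out["instructions"])
```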
The second embodiment of the invention provides a message processing method, which embodies the process of reporting the data packet input message of a target message to be processed based on a target message flow to a controller for processing to obtain a corresponding processing result, and the process of processing each message to be processed in the target message flow according to the corresponding processing result. According to the method, the corresponding first data packet input message is generated based on the key value corresponding to the target message to be processed and the cache identifier, so that the phenomenon that the target message to be processed occupies a longer transmission time due to the fact that the target message to be processed is too large can be avoided, and the transmission efficiency of the first data packet input message is improved.
Optionally, reporting a data packet input message of a target message to be processed based on a target message flow to a controller for processing to obtain a corresponding processing result comprises: generating a corresponding second data packet input message based on the target message to be processed; caching the second data packet input message to a message sending queue; if the number of data packet input messages in the message sending queue does not reach a set threshold value, reporting all data packet input messages in the message sending queue to the controller for processing, and obtaining the data packet output message corresponding to each data packet input message as the processing result; if the number of data packet input messages in the message sending queue reaches the set threshold value, reporting a set number of data packet input messages from the message sending queue to the controller each time for processing to obtain a corresponding processing result; and the processing result comprises a second data packet output message corresponding to the second data packet input message.
On the basis of the above embodiment, the second packet input message may be understood as a packet input message generated based on the target pending message. Specifically, a corresponding second data packet input message may be generated based on the target message to be processed; there is no particular limitation on how the second packet input message is generated; for example, the entire target pending message may be encapsulated in one packet input message to form a corresponding second packet input message.
It can be understood that, if the target packet to be processed is small or there is no extra space in the local buffer, the target packet to be processed may not be required to be cached in the local buffer at this time, and a corresponding second data packet input message may be generated based on the entire target packet to be processed and reported to the controller for processing.
The second data packet input message may be buffered in the message sending queue, and in the buffering process, the order of the second data packet input message in the message sending queue may be determined according to the priority identifier corresponding to the second data packet input message.
If the number of the data packet input messages in the message sending queue does not reach the set threshold value, reporting all the data packet input messages in the message sending queue to a controller for processing to obtain data packet output messages corresponding to all the data packet input messages in all the data packet input messages as processing results; if the number of the data packet input messages in the message sending queue reaches the set threshold value, the data packet input messages in the message sending queue with the set number can be reported to the controller for processing each time, and the corresponding processing result is obtained. It will be appreciated that the processing result may include a second packet output message corresponding to the second packet input message. The second packet output message may be understood as a processing result obtained by the controller performing corresponding parsing processing based on the second packet input message in the received packet input messages. There is no particular limitation on how the controller processes the second packet incoming message.
Optionally, the message characteristic information corresponding to the target message flow is used as a key value of the target message to be processed, and at least one target message flow is cached in the local cache according to the corresponding key value;
processing each message to be processed in the target message flow according to the corresponding processing result, comprising: determining a second data packet output message corresponding to the second data packet input message from a processing result according to a key value corresponding to the target message to be processed, and searching a target message stream to which the target message to be processed belongs from a local buffer; and processing each message to be processed in the target message flow to which the target message to be processed belongs according to the processing instruction in the second data packet output message.
It can be understood that, since the second data packet input message includes the key value of the target to-be-processed message, the corresponding second data packet output message may also include the key value of the target to-be-processed message, on this basis, the second data packet output message corresponding to the second data packet input message may be determined from the processing result according to the key value corresponding to the target to-be-processed message, and after the second data packet output message is obtained, the target message stream to which the target to-be-processed message belongs may be determined according to the key value in the second data packet output message. The target message flow in the local buffer is cached according to the corresponding key value, so that the target message flow to which the target message to be processed belongs can be searched from the local buffer according to the key value. On the basis, each message to be processed (including the target message to be processed) in the target message flow to which the target message to be processed belongs can be processed according to the processing instruction in the second data packet output message.
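When no Buffer ID is carried, the KEY can be recomputed from the packet returned in the second data packet output message and used to locate the cached flow; the sketch below reuses the flow_key helper and the assumed apply_instructions hook:

```python
def handle_second_packet_out(packet_out: dict,
                             flows_by_key: dict,      # local cache: flow KEY -> cached packets
                             apply_instructions) -> None:
    """No Buffer ID: recompute the KEY from the full packet carried in the Packet-out and
    process the whole cached flow with the carried processing instructions."""
    key = flow_key(packet_out["packet"])              # same KEY function used when splitting
    for pkt in flows_by_key.pop(key, []):
        apply_instructions(pkt, packet_out["instructions"])
```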
The present invention is exemplified below.
The embodiment of the invention provides a method for an Openflow switch to process service (Table Miss) messages, which comprises the following steps: determining the element KEY (namely the key value) of a flow (namely a target message flow), caching the Table Miss messages by flow KEY, uploading only the first packet of each flow (namely the flow head packet of the target message flow), and shaping and uploading all the Packet-in messages to be uploaded according to the Meter (namely, reporting a set number of data packet input messages from the message sending queue to the controller each time for processing to obtain a corresponding processing result); processing the message and the cached messages of the same flow according to the Packet-out message sent by the controller (namely, for each target message flow, processing each message to be processed in the target message flow according to the corresponding processing result), and processing the subsequent flow according to the issued flow table.
Five-tuple information (such as source IP address, source port, destination IP address, destination port and transport layer protocol) and other message information (i.e. the message characteristic information) can be used as the flow KEY. When the Controller requires the Openflow switch to set a message Buffer (i.e., the local cache) and to transmit Packet_in messages for Table Miss that carry only message content of a limited maximum length (realized by setting the transmitted message MAX_LEN) together with the corresponding Buffer ID (i.e., the cache identifier), the message header (i.e., content of length MAX_LEN) may be used as the KEY to distinguish flows (i.e., shunting the message flow to be processed according to the message characteristic information to obtain at least one target message flow). Messages are cached according to the KEY, the flow head packet is put into the Packet_in message sending queue for Table Miss (namely, the data packet input message is cached into the message sending queue), and shaped sending is performed according to the Meter value (namely, the set number) (namely, a set number of data packet input messages from the message sending queue are reported to the controller each time for processing).
The switch receives the Packet-out message sent by the controller. If a Buffer_ID is designated for forwarding, the flow to which the message belongs is searched (i.e., the target message flow to which the flow head packet belongs is determined according to the cache identifier in the first data packet output message), and all cached messages of the flow are forwarded according to the instruction of the Packet-out message (i.e., each message to be processed in the target message flow to which the flow head packet belongs is processed according to the processing instruction in the first data packet output message). If no Buffer_ID is designated, the message KEY is calculated from the message and the flow to which the message belongs is searched according to the KEY (namely, the key value corresponding to the flow head packet is determined and the target message flow to which the flow head packet belongs is searched from the local cache according to the key value), and all cached messages of the flow are forwarded according to the Packet-out indication (namely, each message to be processed in the target message flow to which the flow head packet belongs is processed according to the processing instruction in the second data packet output message).
The invention splits Table Miss messages into flows before reporting Packet-in messages, which reduces repeated reporting for the same flow and ensures that every flow can be reported through a Packet-in message, obtain timely processing by the Controller, and quickly enter the forwarding state.
Fig. 4 is a schematic diagram illustrating an implementation of a message processing method according to a second embodiment of the present invention. As shown in fig. 4, the specific implementation process of the message processing method is as follows:
S310, the switch receives the message through the switch ingress.
The switch ingress is the port through which the message stream corresponding to the service is transmitted.
S320, judging whether the message fails to match a corresponding flow table entry (namely whether a Table Miss occurs); if yes, go to S330; otherwise, S360 is performed.
S330, extracting the KEY of the message, and caching the message into the Buffer according to the KEY.
S340, judging whether the message is a flow head packet of the corresponding message flow, if so, executing S350; otherwise, S370 is performed.
And S350, caching the Packet_In message corresponding to the message into the Packet_In message sending queue, performing shaped sending according to the Meter value, and continuing to execute S370.
And S360, forwarding the message according to the indication of the flow table entry, and continuing to execute S370.
And S370, ending.
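Tying the steps of Fig. 4 together, and building on the helper sketches above (FlowEntry, matches, flow_key, PacketInQueue), the ingress handling can be outlined as follows; the forwarding action of S360 is abbreviated to a comment:

```python
def on_ingress_packet(pkt: Packet,
                      flow_table: List[FlowEntry],       # sorted by descending priority
                      flows_by_key: Dict[tuple, list],   # local Buffer keyed by flow KEY
                      queue: PacketInQueue) -> None:
    """Outline of the ingress handling of Fig. 4, built on the helper sketches above."""
    for entry in flow_table:
        if matches(entry, pkt):          # S320: the message matches a flow table entry
            entry.packet_count += 1
            return                       # S360: forward according to the entry's instructions
    key = flow_key(pkt)                  # S330: Table Miss, extract the KEY
    flow = flows_by_key.setdefault(key, [])
    flow.append(pkt)                     # cache the message into the Buffer under its KEY
    if len(flow) == 1:                   # S340: only the flow head packet is reported
        queue.push({"type": "packet_in", "key": key, "data": pkt})  # S350: queue the Packet_In
```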
Fig. 5 is a schematic diagram illustrating an implementation of another message processing method according to a second embodiment of the present invention. As shown in fig. 5, the specific implementation process of the message processing method is as follows:
S410, the switch receives the Packet-out message through the switch management ingress.
The switch management ingress is the port communicatively connected to the controller.
S420, judging whether the Packet-out message designates Buffer_ID forwarding; if yes, go to S430; otherwise, S440 is performed.
S430, searching the Buffer for the flow to which the message indicated by the Buffer_ID belongs, and continuing to execute S450.
S440, calculating a KEY of the message according to the message, searching the flow to which the message belongs according to the KEY, and continuing to execute S450.
S450, forwarding all messages to be processed in the flow to which the messages belong according to the indication of the Packet-out message.
And S460, ending.
EXAMPLE III
Fig. 6 is a schematic structural diagram of a message processing apparatus according to a third embodiment of the present invention. As shown in fig. 6, the apparatus includes:
the distribution module 510 is configured to distribute a to-be-processed packet flow according to packet characteristic information to obtain at least one target packet flow, where the target packet flow is a packet flow composed of multiple to-be-processed packets with the same packet characteristic information, and the target packet flow corresponds to one packet characteristic information.
A reporting module 520, configured to report, for each target packet flow, a packet input message of a target to-be-processed packet based on the target packet flow to a controller for processing, so as to obtain a corresponding processing result, where the target to-be-processed packet is a to-be-processed packet in the target packet flow;
the processing module 530 is configured to, for each target packet stream, process each to-be-processed packet in the target packet stream according to the corresponding processing result.
In the message processing apparatus provided in the third embodiment of the present invention, the distribution module 510 first shunts a to-be-processed message flow according to message characteristic information to obtain at least one target message flow, where a target message flow is a message flow composed of a plurality of to-be-processed messages having the same message characteristic information and corresponds to one piece of message characteristic information. Then, for each target message flow, the reporting module 520 reports a data packet input message of a target to-be-processed message based on the target message flow to a controller for processing and obtains a corresponding processing result, where the target to-be-processed message is one to-be-processed message in the target message flow. Finally, for each target message flow, the processing module 530 processes each to-be-processed message in the target message flow according to the corresponding processing result. By shunting the to-be-processed message flow, the apparatus can conveniently classify and process each target message flow after the split; and because only the data packet input message based on the target to-be-processed message is reported to the controller, repeated reporting of data packet input messages for the same message flow is reduced, the impact of a large number of data packet input messages on the controller is effectively reduced, and the accuracy and efficiency of message processing are improved.
Optionally, the message characteristic information corresponding to the target message flow is used as a key value of the target message flow, and the at least one target message flow is cached in a local cache according to the corresponding key value;
the reporting module 520 includes:
a first generating unit, configured to generate a corresponding first data packet input message based on a key value corresponding to the target to-be-processed packet and a cache identifier, where the cache identifier is used to indicate a target packet stream to which the target to-be-processed packet belongs in the local cache;
the first buffer unit is used for buffering the first data packet input message to a message sending queue;
a first reporting unit, configured to report all data packet input messages in the message sending queue to the controller for processing if the number of the data packet input messages in the message sending queue does not reach a set threshold, and obtain a data packet output message corresponding to each data packet input message in all the data packet input messages as a processing result;
a second reporting unit, configured to report a set number of data packet input messages in the message sending queue to the controller for processing each time if the number of the data packet input messages in the message sending queue reaches a set threshold, so as to obtain a corresponding processing result;
wherein the processing result includes a first packet output message corresponding to the first packet input message.
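A minimal sketch of the threshold-based reporting just described, assuming the sending queue is a deque of Packet-in dictionaries and the controller exposes a report call that returns the matching data packet output message; SET_THRESHOLD and BATCH_SIZE stand in for the set threshold and the set number, which the embodiment leaves configurable.

from collections import deque

SET_THRESHOLD = 64   # set threshold on the sending-queue depth (assumed value)
BATCH_SIZE = 16      # set number of messages reported per round (assumed value)

def report_queue(send_queue, controller):
    results = {}
    if len(send_queue) < SET_THRESHOLD:
        # below the threshold: report all data packet input messages in one pass
        while send_queue:
            packet_in = send_queue.popleft()
            results[packet_in["key"]] = controller.report(packet_in)
    else:
        # threshold reached: report only a set number of messages per round
        for _ in range(min(BATCH_SIZE, len(send_queue))):
            packet_in = send_queue.popleft()
            results[packet_in["key"]] = controller.report(packet_in)
    return results   # data packet output messages keyed by the KEY of each input message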
Optionally, the processing module 530 includes:
a first determining unit, configured to determine, according to a key value corresponding to the target packet to be processed, a first packet output message corresponding to the first packet input message from the processing result;
a second determining unit, configured to determine, according to the cache identifier in the first data packet output message, a target packet stream to which the target packet to be processed belongs;
and the first processing unit is used for processing each message to be processed in the target message flow to which the target message to be processed belongs according to the processing instruction in the first data packet output message.
Optionally, the reporting module 520 includes:
a second generating unit, configured to generate a corresponding second data packet input message based on the target message to be processed;
the second buffer unit is used for buffering the second data packet input message to a message sending queue;
a third reporting unit, configured to report all data packet input messages in the message sending queue to the controller for processing if the number of the data packet input messages in the message sending queue does not reach a set threshold, and obtain a data packet output message corresponding to each data packet input message in all the data packet input messages as a processing result;
a fourth reporting unit, configured to report a set number of data packet input messages in the message sending queue to the controller for processing each time if the number of the data packet input messages in the message sending queue reaches a set threshold, so as to obtain a corresponding processing result;
wherein the processing result includes a second packet output message corresponding to the second packet input message.
Optionally, the message characteristic information corresponding to the target message flow is used as a key value of the target message to be processed, and the at least one target message flow is cached in a local cache according to the corresponding key value;
the processing module 530 includes:
and the searching unit is used for determining, according to the key value corresponding to the target message to be processed, a second data packet output message corresponding to the second data packet input message from the processing result, and for searching the local cache for the target message stream to which the target message to be processed belongs.
And the second processing unit is used for processing each message to be processed in the target message flow to which the target message to be processed belongs according to the processing instruction in the second data packet output message.
Optionally, the message sending queue includes at least one data packet input message, and each data packet input message determines its position in the message sending queue according to its corresponding priority identifier.
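One way the priority-ordered sending queue could be realized is sketched below; it assumes a smaller numeric priority identifier means a more urgent data packet input message and keeps arrival order among equal priorities, neither of which is fixed by the embodiment.

import heapq
import itertools

class PrioritySendQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # tie-breaker that preserves FIFO order per priority

    def push(self, packet_in, priority_id):
        heapq.heappush(self._heap, (priority_id, next(self._order), packet_in))

    def pop(self):
        # returns the queued data packet input message with the most urgent priority identifier
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)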
Optionally, the target message to be processed is a flow head packet of the target message flow, and the flow head packet is a first message to be processed in the target message flow;
or, the target message to be processed is a flow tail packet of the target message flow, and the flow tail packet is the last message to be processed in the target message flow;
or, the target message to be processed is a message to be processed with the smallest message size in the target message flow.
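The three alternatives above amount to a simple selection rule; the sketch below assumes the flow is held as an ordered list of messages with a size field, and the strategy names are illustrative only.

def pick_target(flow_messages, strategy="head"):
    if strategy == "head":        # flow head packet: the first message to be processed
        return flow_messages[0]
    if strategy == "tail":        # flow tail packet: the last message to be processed
        return flow_messages[-1]
    if strategy == "smallest":    # the message to be processed with the smallest message size
        return min(flow_messages, key=lambda m: m["size"])
    raise ValueError(f"unknown strategy: {strategy}")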
The message processing apparatus provided by the embodiment of the present invention can execute the message processing method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method.
Example Four
Fig. 7 is a schematic structural diagram of a switch according to a fourth embodiment of the present invention. A switch is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The switch may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 7, the switch 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from a storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the switch 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in switch 10 are connected to I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the switch 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as the message processing method.
In some embodiments, the message processing method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed on switch 10 via ROM 12 and/or communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the message processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the message processing method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Computer programs for implementing the methods of the present invention can be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
Example Five
Fig. 8 is a schematic structural diagram of a message processing system according to a fifth embodiment of the present invention. As shown in fig. 8, the message processing system includes: the switch 610 provided by the embodiment of the present invention, and a controller 620 communicatively connected to the switch 610;
the switch 610 shunts a to-be-processed message stream according to message characteristic information to obtain at least one target message stream, where the target message stream is a message stream composed of multiple to-be-processed messages having the same message characteristic information, and the target message stream corresponds to one message characteristic information;
for each target message flow, the switch 610 reports a data packet input message of a target to-be-processed message based on the target message flow to the controller 620 for processing, and obtains a corresponding processing result, wherein the target to-be-processed message is one to-be-processed message in the target message flow;
for each target message flow, the switch 610 processes each message to be processed in the target message flow according to the corresponding processing result.
The message processing system provided in the fifth embodiment may be configured to execute the message processing method provided in any of the above embodiments, and has corresponding functions and beneficial effects.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A message processing method is applied to a switch, and the method comprises the following steps:
shunting a message flow to be processed according to message characteristic information to obtain at least one target message flow, wherein the target message flow is a message flow consisting of a plurality of messages to be processed with the same message characteristic information, and the target message flow corresponds to one message characteristic information;
for each target message flow, reporting a data packet input message of a target message to be processed based on the target message flow to a controller for processing to obtain a corresponding processing result, wherein the target message to be processed is one message to be processed in the target message flow;
and aiming at each target message flow, processing each message to be processed in the target message flow according to the corresponding processing result.
2. The method of claim 1, wherein message characteristic information corresponding to the target message flow is used as a key value of the target message flow, and the at least one target message flow is cached in a local cache according to the corresponding key value;
the reporting of the data packet input message of the target to-be-processed message based on the target message stream to a controller for processing to obtain a corresponding processing result includes:
generating a corresponding first data packet input message based on a key value corresponding to the target message to be processed and a cache identifier, wherein the cache identifier is used for indicating a target message stream to which the target message to be processed belongs in the local cache;
caching the first data packet input message to a message sending queue;
if the number of the data packet input messages in the message sending queue does not reach a set threshold value, reporting all the data packet input messages in the message sending queue to the controller for processing, and obtaining, as processing results, a data packet output message corresponding to each data packet input message in all the data packet input messages;
if the number of the data packet input messages in the message sending queue reaches a set threshold value, reporting the data packet input messages in the message sending queue with the set number to the controller for processing each time, and obtaining a corresponding processing result;
wherein the processing result includes a first packet output message corresponding to the first packet input message.
3. The method of claim 2, wherein the processing each message to be processed in the target message flow according to the corresponding processing result comprises:
determining a first data packet output message corresponding to the first data packet input message from the processing result according to a key value corresponding to the target message to be processed;
determining a target message flow to which the target message to be processed belongs according to the cache identifier in the first data packet output message;
and processing each message to be processed in the target message flow to which the target message to be processed belongs according to the processing instruction in the first data packet output message.
4. The method of claim 1,
the reporting of the data packet input message of the target to-be-processed message based on the target message stream to a controller for processing to obtain a corresponding processing result includes:
generating a corresponding second data packet input message based on the target message to be processed;
caching the second data packet input message to a message sending queue;
if the number of the data packet input messages in the message sending queue does not reach a set threshold value, reporting all the data packet input messages in the message sending queue to the controller for processing, and obtaining, as processing results, a data packet output message corresponding to each data packet input message in all the data packet input messages;
if the number of the data packet input messages in the message sending queue reaches a set threshold value, reporting the data packet input messages in the message sending queue with the set number to the controller for processing each time, and obtaining a corresponding processing result;
wherein the processing result includes a second packet output message corresponding to the second packet input message.
5. The method according to claim 4, wherein message characteristic information corresponding to the target message flow is used as a key value of the target message to be processed, and the at least one target message flow is cached in a local cache according to the corresponding key value;
the processing each message to be processed in the target message flow according to the corresponding processing result comprises:
determining a second data packet output message corresponding to the second data packet input message from the processing result according to a key value corresponding to the target message to be processed, and searching a target message stream to which the target message to be processed belongs from the local cache;
and processing each message to be processed in the target message flow to which the target message to be processed belongs according to the processing instruction in the second data packet output message.
6. The method according to any of claims 2-5, wherein the messaging queue comprises at least one data packet input message, the data packet input message determining an order of arrangement in the messaging queue according to a corresponding priority identifier.
7. The method according to claim 1, wherein the target packet to be processed is a head packet of the target packet flow, and the head packet is a first packet to be processed in the target packet flow;
or, the target message to be processed is a flow tail packet of the target message flow, and the flow tail packet is the last message to be processed in the target message flow;
or, the target message to be processed is a message to be processed with the smallest message size in the target message flow.
8. A message processing apparatus, comprising:
the distribution module is used for distributing message flows to be processed according to message characteristic information to obtain at least one target message flow, wherein the target message flow is a message flow consisting of a plurality of messages to be processed with the same message characteristic information, and the target message flow corresponds to one message characteristic information;
a reporting module, configured to report, for each target message flow, a data packet input message of a target message to be processed based on the target message flow to a controller for processing, so as to obtain a corresponding processing result, wherein the target message to be processed is one message to be processed in the target message flow;
and the processing module is used for processing each message to be processed in each target message flow according to the corresponding processing result.
9. A switch, characterized in that the switch comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the message processing method of any of claims 1-7.
10. A message processing system, the system comprising: the switch of claim 9, and a controller communicatively coupled to the switch.
11. A computer-readable storage medium storing computer instructions for causing a processor to perform the message processing method of any one of claims 1-7 when executed.

Publications (1)

Publication Number Publication Date
CN115225578A 2022-10-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination