CN111131499A - Concurrent and asynchronous task processing method and device thereof - Google Patents
- Publication number
- CN111131499A (application CN201911413124.1A)
- Authority
- CN
- China
- Prior art keywords
- client
- reactor
- request
- thread
- connection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/16—Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
- H04L69/161—Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
- H04L69/162—Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields involving adaptations of sockets based mechanisms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/22—Parsing or analysis of headers
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a concurrent and asynchronous task processing method and a corresponding device. Through a multi-threaded Reactor and multi-process Worker architecture, an epoll-based reactor thread can handle a very large number of connection requests, so the concurrency with which the server side responds to client requests is improved. By creating two kinds of processes at the same time, namely ordinary worker processes and task worker processes, the worker processes handle requests that do not take long to process while the task worker processes handle longer-running requests; asynchronous tasks can thus be distributed evenly to the task worker processes for execution, further improving the speed of task response processing on the server side.
Description
Technical Field
The invention relates to a concurrent and asynchronous task processing method for a server side, and also to a computer device for concurrent and asynchronous task processing on the server side, belonging to the technical field of computer communication.
Background
In a cloud voice call system, a large number of client requests arrive at the server side simultaneously or within a very short time. Each request requires the server side to consume resources to process it, produce the corresponding feedback, and return the feedback data to the client that sent the request.
From the server's perspective, handling high concurrency consumes server resources such as the number of processes that can be started simultaneously, the number of threads that can run simultaneously, the number of network connections, CPU, I/O, and memory. Because the resources on the server side are limited, the number of requests the server can process simultaneously is also limited: processing and responses on the server side slow down, some requests may even be discarded without being processed, and in severe cases the server crashes. A solution for rapidly processing highly concurrent client requests is therefore needed.
Disclosure of Invention
One object of the present invention is to provide a concurrent and asynchronous task processing method for a server side.
Another object of the present invention is to provide a computer device for concurrent and asynchronous task processing.
To achieve the above objects, the invention adopts the following technical solutions:
according to a first aspect of the embodiments of the present invention, there is provided a method for processing concurrent and asynchronous tasks at a server, including the following steps:
receiving a request sent by a client through a main reactor thread, wherein the main reactor thread corresponds to a reactor thread group consisting of a plurality of reactor threads; distributing the client's request to a first reactor thread in the reactor thread group, wherein the first reactor thread is any one of the reactor threads in the reactor thread group, is based on epoll, and is configured to monitor state changes of the client; the first reactor thread parses the client's request according to a predetermined protocol and then passes it to a worker process through a pipe, the worker process being configured to process client requests whose processing time is less than a predetermined threshold; and the first reactor thread receives the worker process's processing result for the client request and feeds the result back to the requesting client through a socket according to the predetermined protocol.
In some embodiments of the invention, the method further comprises: creating a task worker process different from the worker process, the task worker process being configured to process client requests whose processing time is not less than the predetermined threshold.
In some embodiments of the invention, the method further comprises: receiving a first message and a first connection of a first client through a first ws worker process among a plurality of ws worker processes, wherein the first ws worker process is any one of the plurality of ws worker processes and the first client is any one of the clients; sending the first message to a proxy server so that the proxy server produces a first response message for the first message, and placing the first connection into a connection pool, the connection pool being configured to maintain a plurality of connections handled by the plurality of ws worker processes; and the proxy server transmitting the first response message to the first connection in the connection pool through a first monitoring process by inter-process communication.
In some embodiments of the invention, the method further comprises: the proxy server sending a detected event of a terminal device other than the client to a tcp client listening interface as callback data; and the tcp client listening interface returning the callback data to the client through inter-process communication between a second monitoring process and the first monitoring process.
In some embodiments of the present invention, the first connection in the connection pool is identified by <first connection, first monitoring process>, and the first monitoring process is configured to exchange data with other processes through the channel of the first connection.
In some embodiments of the invention, the callback data is sent to the tcp client listening interface based on a callback address provided for the ws server.
In some embodiments of the present invention, the tcp client listening interface returning the callback data to the client via inter-process communication between the second monitoring process and the first monitoring process includes: the ws server receiving request data of the tcp client listening interface, the request data being associated with the callback data, establishing the second monitoring process and a second connection according to the request data, and identifying the second connection by <second connection, second monitoring process>, wherein the second monitoring process and the first monitoring process are both in the connection pool; and receiving the callback data through the second monitoring process and pushing the callback data to the client through the first monitoring process.
In some embodiments of the invention, the client request comprises an input/output request for a database.
According to a second aspect of the embodiments of the present invention, there is provided a computer device for concurrent and asynchronous task processing, comprising a memory and a processor, wherein the memory is configured to store computer instructions and the processor is configured to execute the computer instructions to cause the computer device to perform: receiving a request sent by a client through a main reactor thread, wherein the main reactor thread corresponds to a reactor thread group consisting of a plurality of reactor threads; distributing the client's request to a first reactor thread in the reactor thread group, wherein the first reactor thread is any one of the reactor threads in the reactor thread group, is based on epoll, and is configured to monitor state changes of the client; the first reactor thread parses the client's request according to a predetermined protocol and then passes it to a worker process through a pipe, the worker process being configured to process client requests whose processing time is less than a predetermined threshold; and the first reactor thread receives the worker process's processing result for the client request and feeds the result back to the requesting client through a socket according to the predetermined protocol.
In some embodiments of the invention, the processor is further configured to execute the computer instructions to cause the computer device to perform: creating a task worker process different from the worker process, the task worker process being configured to process client requests whose processing time is not less than the predetermined threshold.
In some embodiments of the invention, the processor is further configured to execute the computer instructions to cause the computer device to perform: receiving a first message and a first connection of a first client through a first ws worker process among a plurality of ws worker processes, wherein the first ws worker process is any one of the plurality of ws worker processes and the first client is any one of the clients; sending the first message to a proxy server so that the proxy server produces a first response message for the first message, and placing the first connection into a connection pool, the connection pool being configured to maintain a plurality of connections handled by the plurality of ws worker processes; and the proxy server transmitting the first response message to the first connection in the connection pool through a first monitoring process by inter-process communication.
In some embodiments of the invention, the processor is further configured to execute the computer instructions to cause the computer device to perform: the proxy server sending a detected event of a terminal device other than the client to a tcp client listening interface as callback data; and the tcp client listening interface returning the callback data to the client through inter-process communication between a second monitoring process and the first monitoring process.
In some embodiments of the present invention, the first connection in the connection pool is identified by <first connection, first monitoring process>, and the first monitoring process is configured to exchange data with other processes through the channel of the first connection.
In some embodiments of the invention, the processor is further configured to execute the computer instructions to cause the computer device to perform: sending the callback data to the tcp client listening interface based on a callback address provided for the ws server.
In some embodiments of the present invention, the tcp client listening interface returning the callback data to the client via inter-process communication between the second monitoring process and the first monitoring process includes: the ws server receiving request data of the tcp client listening interface, the request data being associated with the callback data, establishing the second monitoring process and a second connection according to the request data, and identifying the second connection by <second connection, second monitoring process>, wherein the second monitoring process and the first monitoring process are both in the connection pool; and receiving the callback data through the second monitoring process and pushing the callback data to the client through the first monitoring process.
In some embodiments of the invention, the client request comprises an input/output request of a database, which is handled by the task worker process.
Compared with the prior art, the concurrent and asynchronous task processing method provided by the embodiments of the invention uses a multi-threaded Reactor and multi-process Worker architecture in which an epoll-based reactor can handle a very large number of connection requests, thereby improving the concurrency with which the server side responds to client requests; and by creating two kinds of processes at the same time, namely ordinary worker processes and task worker processes, where the worker processes handle requests that do not take long to process and the task worker processes handle longer-running requests, asynchronous tasks can be distributed evenly to the task worker processes for execution, further improving the speed of task response processing on the server side.
Drawings
FIG. 1 shows a flow diagram for concurrent and asynchronous task processing, according to an embodiment of the invention.
FIG. 2 shows a task processing architecture diagram according to an embodiment of the invention.
Fig. 3 shows a flow diagram of determination of active connections according to an embodiment of the invention.
Fig. 4 shows a workflow diagram of the server side according to an embodiment of the invention.
FIG. 5 shows a data processing flow diagram according to an embodiment of the invention.
FIG. 6 shows another data processing flow diagram according to an embodiment of the invention.
Fig. 7 shows a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical contents of the invention are described in detail below with reference to the accompanying drawings and specific embodiments.
It is noted that well-known modules, units and their mutual connections, links, communications or operations are not shown or described in detail. Also, the described features, architectures, or functions may be combined in any manner in one or more embodiments. It will be understood by those skilled in the art that the various embodiments described below are illustrative only and are not intended to limit the scope of the present invention. It will also be readily understood that the modules or units or processes of the embodiments described herein and illustrated in the figures can be combined and designed in a wide variety of different configurations.
The scenario to which embodiments of the present invention relate is briefly described first. In a cloud voice call system, a client is installed on the terminal device of an agent or customer-service representative and communicates with the server side through browser-related protocols. The client is displayed on the terminal device as a browser page, and the communication between the agent and the customer is also carried out in the browser client. On one hand, the server side provides the front-end browser client with data request and parsing services related to the call; on the other hand, it also provides related services such as querying and storing the information of the called customer involved in the call.
Referring to FIG. 1, FIG. 1 shows a flow diagram for concurrent and asynchronous task processing, according to an embodiment of the invention. The method for processing the concurrent and asynchronous tasks provided by the embodiment of the invention can comprise the steps of S101, S102, S103 and S104. It should be noted that the embodiments of the present invention are not limited to the four steps, and in other embodiments of the present invention, additional steps, modifications or replacements of the steps described above, and the like may also be included.
S101, receiving a request sent by a client through a main reactor thread, wherein the main reactor thread corresponds to a reactor thread group consisting of a plurality of reactor threads.
The embodiment of the invention adopts a process model based on Swoole. In a particular embodiment, the Swoole configuration items may include reactor_num: 4, which specifies the number of reactor threads, i.e., the number of event-processing threads in the main process. By default this equals the number of CPU cores; it is generally set to one to four times the number of CPU cores and should not exceed four times the number of CPU cores.
As shown in fig. 2, in the embodiment of the present invention, a main reactor thread and a plurality of reactor threads are created, and the plurality of reactor threads form a reactor thread group. Each reactor thread works on the basis of epoll. In addition, a preset number of worker processes and task worker processes can be created, and their numbers can be chosen according to actual needs. The main thread is configured to receive requests from clients, creating a new socket fd for each new connection. The number of clients can be very large; the clients can be browser clients installed on the agents' terminal devices, and the browser clients communicate with the server side based on the Hypertext Transfer Protocol (HTTP). In the example shown in FIG. 2 there are three reactor threads, but in other embodiments there may be another number. An illustrative configuration is sketched below.
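Purely as a minimal sketch (not code from the patent), the multi-threaded Reactor / multi-process Worker model of fig. 2 can be expressed with a Swoole server whose reactor_num, worker_num and task_worker_num settings correspond to the reactor threads, worker processes and task worker processes described above; the host, port and the specific counts below are assumptions chosen only for illustration.

```php
<?php
// Hedged sketch of the Fig. 2 architecture using Swoole's server API.
// Host, port and process counts are illustrative assumptions.
$server = new Swoole\Server('0.0.0.0', 9501, SWOOLE_PROCESS, SWOOLE_SOCK_TCP);

$server->set([
    'reactor_num'     => 4, // event-processing (reactor) threads in the main process
    'worker_num'      => 8, // ordinary worker processes for short-lived requests
    'task_worker_num' => 8, // task worker processes for long-running, asynchronous requests
]);

// The main reactor accepts the connection, a reactor thread reads the data and passes it
// to a worker process through a pipe; this callback then runs inside that worker.
$server->on('Receive', function (Swoole\Server $server, int $fd, int $reactorId, string $data) {
    $result = 'processed: ' . trim($data); // a short request handled directly by the worker
    $server->send($fd, $result);           // the result travels back and is written to the socket
});

// A task callback must be registered when task_worker_num is set; actual dispatching to the
// task workers is shown in the later sketch for step S103.
$server->on('Task', function (Swoole\Server $server, int $taskId, int $srcWorkerId, $data) {
    // no-op placeholder
});

$server->start();
```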
S102, distributing the client's request to a first reactor thread in the reactor thread group, wherein the first reactor thread is any one of the reactor threads in the reactor thread group, is based on epoll, and is configured to monitor state changes of the client.
The main thread handles the accept event; after receiving a new connection, it puts the new socket connection into the event-monitoring loop of a reactor thread. After the request reaches the main thread, the main thread registers the client's request with a corresponding reactor thread, for example any one of the reactor threads in the reactor thread group shown in fig. 2. The reactor threads in the reactor thread group monitor changes of the client based on epoll, such as the dialing state and the rest (break) state of the front-end browser client. Table 1 gives a list of state changes of the front-end browser client.
TABLE 1
Because the reactor threads work on the basis of epoll, each reactor thread can handle a very large number of client connection requests, so highly concurrent client requests can be processed and responded to quickly on the server side. epoll can significantly improve CPU utilization when only a small fraction of a large number of concurrent connections is active, because when handling connections epoll does not traverse the entire set of monitored descriptors but only the active ones.
S103, the first reactor thread parses the client's request according to a predetermined protocol and then passes it to a worker process through a pipe, the worker process being configured to process client requests whose processing time is less than a predetermined threshold.
In some embodiments of the present invention, the first reactor thread is responsible for receiving the data sent by a client, parsing the protocol (for example, the WebSocket protocol, whose handshake is carried over HTTP) and then passing the parsed request to a worker process through a pipe, the worker process being configured to process client requests whose processing time is less than the predetermined threshold.
In other embodiments, the reactor thread is responsible for receiving the data sent by the client, parsing the protocol, and passing the request to a task worker process through the pipe; the task worker process handles client requests whose processing time is not less than the predetermined threshold, such as input/output requests to a database. A sketch of this dispatching is given below.
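The split between ordinary worker processes and task worker processes can be sketched as follows. This is an illustrative example only, not the patent's implementation: the JSON request format, the "op" field and the use of a simple operation whitelist in place of a measured time threshold are all assumptions made for the example.

```php
<?php
// Sketch of the S103 dispatching: short requests are handled by the ordinary worker,
// long-running ones (e.g. database I/O) are handed to a task worker process.
$server = new Swoole\Server('0.0.0.0', 9502);
$server->set(['worker_num' => 4, 'task_worker_num' => 4]);

$server->on('Receive', function (Swoole\Server $server, int $fd, int $reactorId, string $data) {
    $request = json_decode($data, true) ?: [];
    if (($request['op'] ?? '') === 'db_query') {
        // Expected to exceed the time threshold: dispatch to a task worker and return,
        // keeping the ordinary worker free for short requests.
        $server->task(['fd' => $fd, 'request' => $request]);
        return;
    }
    // Short request: processed directly in the ordinary worker.
    $server->send($fd, json_encode(['status' => 'ok']));
});

$server->on('Task', function (Swoole\Server $server, int $taskId, int $srcWorkerId, $payload) {
    // Placeholder for the real database input/output work.
    $result = ['status' => 'done', 'op' => $payload['request']['op']];
    $server->finish(['fd' => $payload['fd'], 'result' => $result]);
});

$server->on('Finish', function (Swoole\Server $server, int $taskId, $payload) {
    // The result travels back through the worker and reactor thread to the client socket (S104).
    $server->send($payload['fd'], json_encode($payload['result']));
});

$server->start();
```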
S104, the first reactor thread receives the worker process's processing result for the client request and feeds the result back to the requesting client through a socket according to the predetermined protocol.
After the worker process or the task worker process finishes processing the client's request, the processing result is passed back to the reactor thread through the pipe, and the reactor thread sends the result to the client over the socket connection according to the protocol, so that the client can display or process it accordingly.
In a specific embodiment, the flow of epoll detection of active connections is shown in fig. 3 and may include:
S301, start the reactor thread;
S302, check the parameters and, as part of the check, assign the index idx of connect_epfd;
S303, initialize resources;
S304, epoll_wait(connect_epfd + idx): start the loop that processes connections; if it succeeds, go to S305, otherwise go to S310;
S305, test events[i].events & EPOLLIN; if set, go to S308, otherwise go to S306;
S306, test events[i].events & EPOLLOUT; if set, go to S307, otherwise go to S309;
S307, handle_send;
S308, handle_receive;
S309, clean up the connection: close the socket connection and remove it from connect_epfd;
S310, destroy the resources;
S311, end the thread.
When a client closes normally, the client sends a close() event. epoll_wait listens for read and write events; when an EPOLLIN event occurs, the server can read from sockfd, i.e., it calls recv on the fd to read its data, and if recv returns a predetermined value (for example, 0) it calls epoll_ctl with EPOLL_CTL_DEL to remove the socket from epoll and then calls close(sockfd).
epoll events can be triggered in level-triggered or edge-triggered mode. In level-triggered mode, once an fd is ready the system keeps issuing ready notifications until the fd is handled; in edge-triggered mode the system issues the ready notification only once. To ensure that every request sent by a client is processed and to keep the system highly available, the embodiment of the invention uses level-triggered epoll. A user-level sketch of this pattern is given below.
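The register/read/close pattern of fig. 3 happens inside the reactor's C-level epoll loop; purely as a hedged user-level analogue (not the patent's code), the same pattern can be illustrated with Swoole's level-triggered event API, where an empty read corresponds to the recv-returns-0 case above. Host and port are placeholders.

```php
<?php
// Illustrative analogue of the Fig. 3 loop: register a listening socket, register each
// accepted connection for read events, and on an empty read remove it and close it (S309).
$listen = stream_socket_server('tcp://0.0.0.0:9600', $errno, $errstr);
stream_set_blocking($listen, false);

Swoole\Event::add($listen, function ($listen) {
    $conn = @stream_socket_accept($listen, 0);
    if ($conn === false) {
        return;
    }
    stream_set_blocking($conn, false);
    Swoole\Event::add($conn, function ($conn) {
        $data = fread($conn, 8192);
        if ($data === '' || $data === false) {
            // Peer closed the connection: delete it from the event loop and close the socket.
            Swoole\Event::del($conn);
            fclose($conn);
            return;
        }
        fwrite($conn, $data); // echo back; a real reactor would hand the data to a worker
    });
});

Swoole\Event::wait(); // run the epoll-backed, level-triggered event loop
```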
By adopting an epoll-based Reactor architecture, the embodiment of the invention allows a process to open a large number of socket descriptors, ensures that I/O efficiency does not decrease linearly as the number of connections grows, and significantly improves the server side's capability to handle concurrent connection tasks.
In some embodiments of the invention, the server-side workflow is as shown in fig. 4. A first message and a first connection of a first client are received through a first ws worker process among a plurality of ws worker processes, wherein the first ws worker process is any one of the plurality of ws worker processes and the first client is any one of the clients. The client in the embodiment of the invention may be a browser client on the agent side, which the agent uses to communicate with customers, handle the corresponding business, and so on. The agent may use a terminal device capable of running a browser, such as a laptop, desktop or smart tablet, log in to the browser with an account, and use the telephony functions provided in the browser. When logging in, the agent must enter the account currently being logged in and the password corresponding to that account; the account may be a sequence of letters, digits or other characters. When the agent's browser logs in, it initially connects to the WebSocket service; once established, the WebSocket connection only needs to be set up once and then remains connected.
In the workflow diagram shown in FIG. 4 there are three ws worker processes, but in other embodiments there may be another number. The client's requests may be of various kinds: for example, a request from the browser client to dial, take a rest (break) or hang up, or an input/output operation such as querying a database for information about the called customer. Message 1 between the client and the ws worker process is client signaling in JSON format and contains the specific signaling operation; for the case where the browser client issues rest (break) signaling, the message format is shown in fig. 5. The client's request message is sent to the proxy server through the ws server, and the proxy server interacts with the other business processing systems. A sketch of a ws worker process handling such signaling follows.
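As a minimal sketch under stated assumptions (not the patent's code), a Swoole WebSocket server can play the role of the ws worker processes, decoding the JSON signaling and handing it to the proxy. The "cmd" field and the forwardToProxy() helper are hypothetical names introduced only for this illustration.

```php
<?php
// Sketch of a ws worker process: receive browser-client signaling and forward it to the proxy.
$ws = new Swoole\WebSocket\Server('0.0.0.0', 9503);
$ws->set(['worker_num' => 3]); // three ws worker processes, as in Fig. 4

$ws->on('Open', function (Swoole\WebSocket\Server $ws, Swoole\Http\Request $request) {
    // The agent's browser connects once and the WebSocket connection then stays open.
    echo "connection {$request->fd} opened\n";
});

$ws->on('Message', function (Swoole\WebSocket\Server $ws, Swoole\WebSocket\Frame $frame) {
    $signal = json_decode($frame->data, true) ?: [];
    // Forward the signaling (e.g. {"cmd":"rest"}) to the proxy server; the proxy's response
    // is later pushed back on this same connection by the monitoring process.
    forwardToProxy($frame->fd, $signal); // hypothetical helper standing in for the proxy interface
});

$ws->start();
```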
The first message sent by the browser client is forwarded to the proxy server, which processes it accordingly after receiving it. For example, when the first message is a dialing request, the proxy server creates a corresponding channel according to the dialed called number and other information carried in the first message, and produces a first response message for the first message. The proxy server transmits the first response message to the first connection in the connection pool via a first monitoring process (e.g., monitoring process p2 shown in fig. 4) through inter-process communication. For example, fig. 5 shows the response message data for a rest request issued by a client; the response message is returned to the browser client via the ws server, and the client handles it according to the returned response data, for instance by presenting the agent's state as resting in part of the browser client's page.
The server-side workflow provided by the embodiment of the invention may include the following: the first connection between the client and the ws worker process is placed into a connection pool, where, as described above, the first connection may be any one of the connections between clients and ws worker processes, and the connection pool is configured to maintain the plurality of connections handled by the plurality of ws worker processes. As shown in fig. 4, the client establishes connection 1, connection 2 and connection 3 with the three ws worker processes respectively, and the connection pool may include connections fd1, fd2 and fd3. The connection pool may also include a monitoring process, e.g., monitoring process p2. The connections can be identified by <fd1, p1>, <fd2, p2> and <fd3, p3>. Processes in the same connection pool communicate by inter-process communication; for example, a connection identified by <fd2, p2> is configured such that process p2 transfers data with other processes through the channel of connection fd2. The transferred data includes the response data returned by the proxy server. A sketch of this bookkeeping is given below.
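A possible shape of this <connection, monitoring process> bookkeeping is sketched below under stated assumptions: a shared Swoole\Table stands in for the connection pool, pipe messages stand in for the inter-process transfer, and the field names are invented for the example.

```php
<?php
// Sketch: record which process owns each fd, and relay data to the owner over a pipe message.
$pool = new Swoole\Table(1024);
$pool->column('worker_id', Swoole\Table::TYPE_INT); // the <fd, owning process> pairs of Fig. 4
$pool->create();

$ws = new Swoole\WebSocket\Server('0.0.0.0', 9504);
$ws->set(['worker_num' => 3]);

$ws->on('Open', function ($ws, $request) use ($pool) {
    // Put the new connection into the pool, keyed by fd, together with its owning worker.
    $pool->set((string) $request->fd, ['worker_id' => $ws->worker_id]);
});

$ws->on('Message', function ($ws, $frame) use ($pool) {
    // Any process that later holds response data can look up the owner and relay to it.
    $owner = $pool->get((string) $frame->fd, 'worker_id');
    if ($owner !== false && $owner !== $ws->worker_id) {
        $ws->sendMessage(json_encode(['fd' => $frame->fd, 'data' => $frame->data]), $owner);
    }
});

$ws->on('PipeMessage', function ($ws, int $srcWorkerId, string $message) {
    // Inter-process communication: the owning process pushes the data out on its connection.
    $payload = json_decode($message, true);
    $ws->push($payload['fd'], $payload['data']);
});

$ws->start();
```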
FIG. 5 shows a scenario without callback data. The embodiment of the invention also includes a server-side processing flow for callback data. Referring to fig. 6 and fig. 4, the proxy server detects an event (e.g., a hang-up event) of a terminal device on the served side (e.g., the called customer's terminal) and sends the detected event as callback data to the tcp client listening interface, and the tcp client listening interface returns the callback data to the front-end browser client through inter-process communication between the second monitoring process (e.g., the tcp monitoring process) and the first monitoring process (e.g., monitoring process p2), so that the front-end page of the browser client can react accordingly. In the scenario where the called customer's terminal hangs up, the browser client updates the page to show that the agent side has hung up, and the connection with the voice gateway is released.
It should be noted that the callback in the embodiment of the present invention is a callback mechanism used by the proxy server to asynchronously notify the ws server of messages. When the proxy server aprxy has an asynchronous event (for example, a customer incoming-call event or an active hang-up event of the called customer's terminal), it asynchronously notifies the ws server through the callback. The callback in the embodiment of the invention is implemented by connecting to a callback address provided by the ws server, e.g. https://192.xxxx/callback. The callback data is sent to the tcp client listening interface based on the callback address provided for the ws server.
In some embodiments of the present invention, the tcp client listening interface returning the callback data to the client via inter-process communication between the second monitoring process and the first monitoring process may include: the ws server receives the request data of the tcp client listening interface, the request data being associated with the callback data, establishes a second monitoring process and a second connection according to the request data, and identifies the second connection by <second connection, second monitoring process>, wherein the second monitoring process and the first monitoring process are both in the connection pool but the second connection is not added to the connection pool; and the callback data is received through the second monitoring process and pushed to the client through the first monitoring process. Specifically, the tcp client listening interface receives the data returned by the proxy server aprxy through the callback address (e.g., customer incoming-call data, hang-up data of the called customer's terminal, and the like) and requests the ws server; the ws server receives the request data and establishes a new connection <fd3, p3>. The monitoring process p3 and the monitoring process p2 are in the same connection pool and communicate with each other by inter-process communication, and the data received by the monitoring process p3 is pushed to the front-end browser through the monitoring process p2, so that the client can handle the related business flow. The fd3 generated by the tcp listening client is not added to the connection pool. A sketch of the callback path follows.
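The callback path can be sketched as follows, again only as an illustration under assumptions: the /callback path, the JSON fields and the pool-lookup placeholder are invented for the example, and the elided callback URL in the text (https://192.xxxx/callback) is left untouched.

```php
<?php
// Sketch of the callback path: the proxy posts event data (e.g. a hang-up event) to a
// callback endpoint on the ws server side; the receiving process relays it via a pipe
// message to the monitoring process that owns the browser connection, which pushes it out.
$http = new Swoole\Http\Server('0.0.0.0', 9505);

$http->on('Request', function (Swoole\Http\Request $request, Swoole\Http\Response $response) use ($http) {
    if ($request->server['request_uri'] === '/callback') {
        $callback = json_decode($request->rawContent(), true) ?: [];
        // Relay to the worker that owns the target browser connection; the lookup of
        // <fd, monitoring process> in the connection pool is omitted here.
        $ownerWorkerId = 0; // placeholder for the connection-pool lookup
        if ($ownerWorkerId !== $http->worker_id) {
            $http->sendMessage(json_encode($callback), $ownerWorkerId);
        }
        $response->end('{"status":"received"}');
        return;
    }
    $response->status(404);
    $response->end();
});

$http->on('PipeMessage', function ($http, int $srcWorkerId, string $message) {
    // In the real system this runs in the ws server and push()es to the browser connection;
    // this standalone sketch just logs it because it has no WebSocket clients attached.
    echo "callback relayed: {$message}\n";
});

$http->start();
```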
The embodiment of the invention also provides computer equipment for processing the concurrent and asynchronous tasks. As shown in fig. 7, the computer device 700 may include a memory 701 and a processor 702, the memory 701 being configured to store computer instructions and associated data (e.g., called user associated data and agent user associated data, etc.), the processor 702 being configured to execute the computer instructions to cause the computer device to perform the concurrent and asynchronous task processing described above. It should be noted that the computer device provided in the embodiment of the present invention may be a distributed computer cluster system formed by a plurality of servers. In particular embodiments, the computer device may include a proxy server, a ws server, and other business processing systems associated with the proxy server and the ws server, among others.
Embodiments of the present invention also provide a machine-readable non-volatile storage medium, on which a computer program or instructions are stored, which, when executed by a processor, implement the processing of concurrent and asynchronous tasks described above.
Embodiments of the present invention also provide a system, which may include a client and a server, where the server may be a computer system as shown in fig. 7, the client may be a browser client installed in a terminal device, and communication between the client and the server may be as described with reference to fig. 1 to fig. 6.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by combining software and a hardware platform. Based on such understanding, all or part of the technical solutions of the present invention contributing to the background may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, a smart phone, a network device, etc.) to execute the concurrent and asynchronous task processing method according to the embodiments or some parts of the embodiments of the present invention.
It should be noted that the above-described embodiments are only a part, not all, of the embodiments of the present invention. The various embodiments described above may be combined in various ways as desired. The terms and expressions used in the specification of the present invention are used as terms of illustration only and are not intended to limit the claims of the present invention.
It will be appreciated by those skilled in the art that changes could be made to the details of the above-described embodiments without departing from the underlying principles thereof. The scope of the invention is, therefore, indicated by the appended claims, in which all terms are intended to be interpreted in their broadest reasonable sense unless otherwise indicated.
It will be apparent to those skilled in the art that obvious modifications made without departing from the spirit of the invention still fall within the scope of protection of the invention; making such modifications without authorization infringes the patent right of the invention and incurs the corresponding legal liability.
Claims (10)
1. A method for processing concurrent and asynchronous tasks is characterized by comprising the following steps:
receiving a request sent by a client through a main reactor thread, wherein the main reactor thread corresponds to a reactor thread group consisting of a plurality of reactor threads;
distributing the client's request to a first reactor thread in the reactor thread group, wherein the first reactor thread is any one of the reactor threads in the reactor thread group, is based on epoll, and is configured to monitor state changes of the client;
the first reactor thread parsing the client's request according to a predetermined protocol and then passing it to a worker process through a pipe, the worker process being configured to process client requests whose processing time is less than a predetermined threshold; and
the first reactor thread receiving the worker process's processing result for the client request and feeding the result back to the requesting client through a socket according to the predetermined protocol.
2. A method of concurrent and asynchronous task processing according to claim 1, further comprising:
creating a task worker process different from the worker process, the task worker process being configured to process client requests whose processing time is not less than the predetermined threshold.
3. A method of concurrent and asynchronous task processing according to claim 1, further comprising:
receiving a first message and a first connection of a first client through a first ws worker process among a plurality of ws worker processes, wherein the first ws worker process is any one of the plurality of ws worker processes and the first client is any one of the clients;
sending the first message to a proxy server so that the proxy server produces a first response message for the first message, and placing the first connection into a connection pool, the connection pool being configured to maintain a plurality of connections handled by the plurality of ws worker processes; and
the proxy server transmitting the first response message to the first connection in the connection pool through a first monitoring process by inter-process communication.
4. A method of concurrent and asynchronous task processing according to claim 3, further comprising:
the proxy server sending a detected event of a terminal device other than the client to a tcp client listening interface as callback data; and
the tcp client listening interface returning the callback data to the client through inter-process communication between a second monitoring process and the first monitoring process.
5. A method of concurrent and asynchronous task processing according to claim 3, wherein:
and the first connection in the connection pool is identified by < first connection, first monitoring process >, and the first monitoring process is configured to perform data transmission with other processes through a channel of the first connection.
6. A method of concurrent and asynchronous task processing according to claim 4, wherein:
the callback data is sent to the tcp client listening interface according to a callback address connection provided for the ws server.
7. A method of concurrent and asynchronous task processing according to claim 6, wherein:
the tcp client monitoring interface returns the callback data to the client through the inter-process communication between the second monitoring process and the first monitoring process, and the method comprises the following steps:
the ws server receives request data of the tcp client monitoring interface, wherein the request data is associated with the callback data, establishes a second monitoring process and a second connection according to the request data, and adopts < second connection, second monitoring process > to identify the second connection, wherein the second monitoring process and the first monitoring process are both arranged in the connection pool;
and receiving the callback data through the second monitoring process, and pushing the callback data to the client through the first monitoring process.
8. A method of concurrent and asynchronous task processing according to claim 2, wherein:
the client request comprises an input/output request of a database, and the input/output request of the database is processed by the task worker process.
9. A computer device comprising a memory and a processor, characterized in that:
the memory configured to store computer instructions;
the processor configured to execute the computer instructions to cause the computer device to perform:
receiving a request sent by a client through a main reactor thread, wherein the main reactor thread corresponds to a reactor thread group consisting of a plurality of reactor threads;
distributing the client's request to a first reactor thread in the reactor thread group, wherein the first reactor thread is any one of the reactor threads in the reactor thread group, is based on epoll, and is configured to monitor state changes of the client;
the first reactor thread parsing the client's request according to a predetermined protocol and then passing it to a worker process through a pipe, the worker process being configured to process client requests whose processing time is less than a predetermined threshold; and
the first reactor thread receiving the worker process's processing result for the client request and feeding the result back to the requesting client through a socket according to the predetermined protocol.
10. The computer device of claim 9, wherein the processor is further configured to execute the computer instructions to cause the computer device to perform:
creating a task worker process different from the worker process, the task worker process being configured to process client requests whose processing time is not less than the predetermined threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911413124.1A (published as CN111131499A) | 2019-12-31 | 2019-12-31 | Concurrent and asynchronous task processing method and device thereof
Publications (1)
Publication Number | Publication Date |
---|---|
CN111131499A (en) | 2020-05-08
Family
ID=70506528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911413124.1A (CN111131499A, withdrawn) | Concurrent and asynchronous task processing method and device thereof | 2019-12-31 | 2019-12-31
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111131499A (en) |
- 2019-12-31: CN201911413124.1A / CN111131499A (status: not active, withdrawn)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5758184A (en) * | 1995-04-24 | 1998-05-26 | Microsoft Corporation | System for performing asynchronous file operations requested by runnable threads by processing completion messages with different queue thread and checking for completion by runnable threads |
KR20030023371A (en) * | 2001-09-13 | 2003-03-19 | 엘지전자 주식회사 | Hybrid server model |
CN103164256A (en) * | 2011-12-08 | 2013-06-19 | 深圳市快播科技有限公司 | Processing method and system capable of achieving one machine supporting high concurrency |
CN104219284A (en) * | 2014-08-11 | 2014-12-17 | 华侨大学 | Server designing method based on semi-synchronization, semi-synchronization and pipe filter mode |
CN106201443A (en) * | 2016-07-27 | 2016-12-07 | 福建富士通信息软件有限公司 | A kind of method and system based on the Storm how concurrent written document of streaming Computational frame |
CN107479955A (en) * | 2017-08-04 | 2017-12-15 | 南京华飞数据技术有限公司 | A kind of efficient response method based on Epoll async servers |
Non-Patent Citations (2)
Title |
---|
XUANHUA SHI; XUAN LUO; JUNLING LIANG; PENG ZHAO; SHENG DI; BINGSHENG HE; HAI JIN: "Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model", IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 31 December 2018 (2018-12-31) * |
ZHU DANQING; GAO QUANSHENG: "Research on server performance enhancement design technology based on epoll + thread pool", Journal of Wuhan Polytechnic University, no. 03, 15 September 2013 (2013-09-15) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111935101A (en) * | 2020-07-16 | 2020-11-13 | 北京首汽智行科技有限公司 | Communication protocol design method between client and server |
CN112612428A (en) * | 2020-12-31 | 2021-04-06 | 上海英方软件股份有限公司 | Method and device for improving performance of Codeigniter frame |
CN112612428B (en) * | 2020-12-31 | 2022-06-28 | 上海英方软件股份有限公司 | Method and device for improving performance of Codeigniter frame |
CN113329275A (en) * | 2021-08-31 | CAT1-based instant intercom device without distance limitation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20200508