WO2017106619A1 - Systems and methods associated with edge computing - Google Patents
Systems and methods associated with edge computing
- Publication number
- WO2017106619A1 (PCT/US2016/067133)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- atoms
- atom
- computing node
- node
- network
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5055—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering software capabilities, i.e. software resources associated or available to the machine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
Definitions
- Edge Computing may extend cloud computing and services to the edge of the network, which may comprise computing nodes deployed inside access networks, mobile devices, Internet of Things (IoT) end devices (e.g., sensors and actuators), and/or the like.
- Edge Computing may have the potential to provide data, computing, storage, application services, and/or other services, at the network edge. This may be similar to Cloud Computing (e.g., use of remote data centers).
- Classical cloud computing may not apply to the problems Edge Computing is designed to solve. Therefore, designs and approaches may be desirable to realize the potential of Edge Computing.
- the computing node may receive a computation request that comprises a script for execution.
- the script may be parsed to determine one or more atoms (e.g., an executable code component) referenced by the script.
- the one or more atoms may be preloaded (e.g., prior to the receiving of the computation request) at one or more nodes in the wireless network.
- the computing node may determine where each of the one or more atoms are preloaded.
- the computation request may be performed locally at the computing node.
- the computing node instead may decide to forward the computation request (e.g., even if the computing node has all atoms loaded locally), for example, if the computing node is under a heavy load and/or depending upon the location of input data (the computation request may further comprise input data or an input data handle).
- the computing node may send a request for the atoms that are not pre-loaded locally, fetch the atoms that are not pre-loaded locally, and perform the computation request locally.
- the computing node may determine that the computation request should be performed, at least in part, at another node.
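- The request-handling behavior described above may be pictured with a short sketch. This is an illustrative outline only; the function names (parse_atoms, fetch_atom, forward_request, execute), the toy parser, and the load threshold are assumptions, not part of the disclosure.

```python
# Hypothetical sketch of a computing node acting on a computation request.
# Helper callables and the threshold are illustrative assumptions.

def parse_atoms(script: str) -> set:
    """Toy parser: collect tokens that look like atom references."""
    return {tok.split(".")[0] for tok in script.split() if tok.startswith("atom_")}

def handle_computation_request(script, local_atoms, local_load, load_threshold,
                               fetch_atom, forward_request, execute):
    atoms_needed = parse_atoms(script)
    missing = atoms_needed - set(local_atoms)
    if local_load > load_threshold:
        # Under heavy load, forward even if all atoms are loaded locally.
        return forward_request(script, atoms_needed)
    if not missing:
        return execute(script, local_atoms)
    if len(missing) <= 1:
        # Few atoms missing: fetch them and run the request locally.
        for atom in missing:
            local_atoms[atom] = fetch_atom(atom)
        return execute(script, local_atoms)
    # Otherwise, let a better-placed node perform the computation.
    return forward_request(script, atoms_needed)
```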
- FIG. 1 depicts an example frame structure, which may be used in implementations described herein.
- FIG. 2 depicts an example frame structure, which may be used for object publication.
- FIG. 3 depicts an example frame structure, which may be used for subscription and/or requests for objects.
- FIG. 4 depicts an example frame structure, which may be used for computation.
- FIG. 5 depicts an example system model for scripts and atoms inside a compute node.
- FIG. 6 depicts an example distributive computing procedure.
- FIG. 7 depicts an example compute node acting on a processing request.
- FIG. 8A depicts a diagram of an example communications system in which one or more disclosed embodiments may be implemented.
- FIG. 8B depicts a system diagram of an example wireless transmit/receive unit (WTRU) that may be used within the communications system illustrated in FIG. 8A.
- FIG. 8C depicts a system diagram of an example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.
- FIG. 8D depicts a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.
- FIG. 8E depicts a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.
- Edge Computing may be driven by several forces or needs. For example, network operators may be willing to provide additional value-added services and/or a better performance/quality experience to end users by leveraging the unique characteristics of their Access Network, such as proximity to the end user, awareness of users' identity, and/or the like. There may also be a need to complement under-powered IoT devices with computing capability at the edge of the network in order to enable complex operations or operations involving large amounts of data and devices. Cloud computing itself may also drive the development of Edge Computing. For instance, cloud computing may lead to more integration of software development and deployment activities (e.g., as illustrated by the Development and Operations or DevOps model of development) in order to cope with increasing system complexity, for example.
- This technology-enabled trend may lead to the merging of network infrastructure with the information technology (IT) world, and may reduce capital expenditure (CAPEX) and/or operating expenditure (OPEX) for the application provider.
- Edge Computing may provide a way to extend this flexibility (e.g., out of the data centers into the rest of the Internet and even end user devices), which may ultimately facilitate innovation for new classes of applications.
- Edge Computing development may include, for example, developments towards mobile network applications (or "mobile edge computing") and/or IoT-focused applications.
- Computing may be provided over a large distributed network of small-footprint devices (e.g., those with limited computing and/or storage capability), which may be referred to as a "fog" (e.g., "fog computing").
- Some classical cloud computing paradigms may not apply to "fog computing.”
- a user of cloud resources may provide a complete system to be executed as part of a self-contained package, e.g., a Virtual Machine (VM) or a container (such as the Linux LXC).
- the computing resources of a single "server” or a highly interconnected set of processing cores may be allocated to support the needs of this package.
- sufficient computing power may not be available on a single device (e.g. , computation may be distributed and coordinated across multiple devices in the network). Coordinating and/or scaling computation across multiple devices in such an environment may be challenging due to requirements associated with state synchronization and/or messaging loads, for example.
- Information-Centric Networking (ICN) may treat programs (or components of a program) and data (e.g., content/information) as named objects and may include extensions that support distributed computing.
- a named object may operate on one or more other objects (e.g., including other named objects); the named objects that operate on others may be referred to as "code", whereas the objects that they operate on may be referred to as "data."
- An ICN-based system may treat code components as it treats data (e.g., for the purpose of in-network storage).
- ICN may be extended to operate in a distributed computing environment (e.g., a fog computing environment) in various ways.
- a program (e.g., every program) may be expressed as a <code, data> pair.
- the <code> component may include one or more pieces of <code> and/or embedded "sub-routines" (e.g., other <code, data> components), while other programming constructs (e.g., persistent/global variables, state, and/or the like) may or may not be allowed.
- the functional programming approach may be substantially stateless.
- a <code, data> request may be executed once all components are available, and the availability may be independently ensured (e.g., state synchronization may not be required).
- the functional programming approach may also be computation generic. For example, it may resemble a Turing Machine.
- One or more aspects of the functional programming approach may be adapted to meet the requirements of a distributed computing environment (e.g. , a fog computing environment).
- a decision-making operation (e.g., at one or more network nodes or each network node) of the functional programming approach may be executed as follows. Upon receiving a <code, data> request,
- an ICN network element may make one or more decisions. For example: Do I have the capability to execute <code>? If yes, should I execute? If no, where should I forward? Do I have all the components of <code>? If no, who should I ask for them? Or should I simply forward somewhere else to execute? Do I have <data>? If not, where can I obtain it? With respect to the last example decision, a node may decide to forward the <code, data> request to a network node (e.g., a node that has <data>) and ask that network node to execute the computation request, for example.
- a <code> may have one or more <code, data> components embedded in it.
- One or more of the <code, data> components (e.g., each of the <code, data> components) may in turn embed further <code, data> components, creating layers of resolution.
- Such layers may be deep, e.g., for a computationally interesting (and relatively complex) task.
- the amount of messaging, the processing involved in decision-making, and/or the delays may create challenges in handling network and/or computational tasks in an Edge Computing environment.
- the network may resolve the names <code> and <data>. For example, <data> may not exist (or the network may not be able to find it), and <code> or the sub-components of <code> may not exist (or they may not be computed).
- the <code, data> objects may not be pushed into the network.
- an application may break up the programs it executes into components and/or provide the network with <program, data> components to execute as a piece of software.
- This alternative approach may encounter problems including, for example, finding a node capable of executing <program, data>, and/or composing and/or managing the result.
- Implementations may include providing an edge computing network with a set of processing atoms, defining a scope of an application configured to be executed on the edge computing network, distributing the set of processing atoms to one or more nodes of the edge computing network, and/or providing computation services from the one or more network nodes in response to a request received from a client subscribing to the scope of the application.
- the frame structure used for communication in the edge computing network may include one or more of the following fields: a field indicating the type of the frame; or, a field containing information (e.g., number of fields, lengths of the fields, etc.) about the frame.
- a client may subscribe to an application's scope (e.g., using implementation(s) described herein), which may encapsulate the context of the application.
- the client may publish into the scope.
- the network may be provided with a set of application specific processing atoms, which may be executable components that an application pushes into the network.
- the network may use information about these atoms to distribute them across the network to nodes that are capable of performing the processing.
- the client may request computation from the network nodes using the frame structure described herein.
- a computing scheme may be disclosed herein.
- the computing scheme may involve atoms (e.g., which may be deployed by an application provider into the edge cloud) and scripts (e.g., which may be assembled by clients and may call atoms and/or other scripts).
- An edge cloud service may provide distributive computation of scripts on the compute nodes composing the system. Operation of the compute nodes composing the system may be defined to provide the edge cloud service. Operation of the compute nodes composing the system may define how atoms may be positioned in the network and how this position (e.g., as well as other operational information) may be propagated to other compute nodes. Operation of the compute nodes composing the system may define how a compute node may process an incoming computation request from a client.
- One or more components may be disclosed herein.
- the one or more components may be combined.
- the component(s) (e.g., combined components) may retain the flexibility of the ICN-based functional programming approach while addressing the issues described herein.
- a "scope" may provide a way for applications to encapsulate context.
- a scope may be viewed as a folder that may comprise named-objects and/or other sub-folders. Scopes may not be folders in the standard computer usage context.
- Context information, which may be captured using metadata, may be associated with scope(s) and/or object(s) (e.g., with multiple scopes).
- When a client subscribes to a scope, it may receive information about one or more (e.g., all) objects associated with the scope and the context (e.g., metadata) of the objects.
- the network may associate forwarding information with the scope and/or perform object management (e.g., deciding where to store the objects on a certain basis, such as a per-scope basis).
- Tasks such as matching demand and availability, determining delivery paths, and/or the like, may be accomplished on a similar basis (e.g., the per-scope basis).
- the results may be associated with the whole scope (e.g., using scope spanning trees).
- a processing "atom” may be employed.
- a processing "atom” may represent an executable component that an application may push into the network (e.g., through a suitable API).
- the atom may be provided within an encapsulation (e.g. , as part of a virtual machine or container package) and/or published into a scope so that scope subscribers may become aware of the atom's availability.
- the atom may be provided with contextual information, which may allow the network to determine on which network node(s) it should be on-boarded (e.g. , readied for execution), for example.
- Components provided herein may be used together to provide a simple approach to fog computing. Approaches described herein may provide flexibility comparable to that resulting from a pure functional programming approach.
- Methods for distributive computation controlled by a network node may be provided.
- Network condition information, computation loading, computational capabilities, and location information from other network nodes and/or location information of atoms may be received.
- Requests to execute a script from a client application may be received. Requests may include input data and/or input data handle. Scripts may include code communicating with atoms and handling messages from atoms. Scripts may be parsed to determine which atoms are used/communicated with by the script.
- One or more network nodes may be selected to execute the script. Selection may be based on one or more of the location of used atoms, network condition information, the computation load on each node, and location information of input data.
- a script may be executed locally (e.g., after a local node collects input data (e.g., if not present locally or in the input message) and/or some atoms (e.g., if those atoms are not already present locally)).
- a script may be executed locally, including by remote invocation of atoms not present locally.
- An execution request may be forwarded towards one or more selected nodes.
- an application provider may publish 100 different atoms (e.g., microservices) in a network, inside a single scope identified using the application domain name (e.g., myapp.example.com). A client may then subscribe to the scope myapp.example.com and request computation of a script calling some of those atoms.
- Methods for policy-based distribution of atoms in an edge cloud may be provided.
- a new "subscribe for computation" message type (e.g., SUBCOMP, which holds the OID of the atom program that the sender holds) may be defined.
- SUBCOMP may be sent and forwarded as a subscription. Recipients may use this message to update their internal representation of the location of atoms on remote compute nodes.
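- A minimal sketch of how a recipient might record a SUBCOMP message is shown below; the table layout and function name are assumptions made for illustration.

```python
# Hypothetical handling of a received SUBCOMP ("subscribe for computation") message.
# atom_locations maps an atom OID to a list of (node, interface, hop_count) entries.

def on_subcomp(atom_locations: dict, atom_oid: str,
               sender_node: str, interface: str, hop_count: int) -> dict:
    entries = atom_locations.setdefault(atom_oid, [])
    entries.append((sender_node, interface, hop_count))
    entries.sort(key=lambda entry: entry[2])  # keep closest known locations first
    return atom_locations
```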
- An application client may subscribe to the application's top-level scope.
- the scope name and/or any authorization parameters may be determined by the application (e.g. , without provision by the network).
- the client may be provided with the scope content (e.g. , after the client subscribes to the scope (or to another scope thereafter)), which may include one or more of the following: one or more sub-scopes of the scope, one or more named objects (e.g., data or code) published into the scope, and/or one or more atoms available as part of the scope.
- the client may be updated with changes of the scope content (e.g., as part of scope subscription), for example in a timely manner (e.g., based on the specific design of the system).
- the client may be configured with different capabilities for the different objects.
- the client may subscribe to a sub-scope (e.g., as if subscribing to a scope).
- the client may subsequently cancel the subscription.
- the client may request a named object, which may trigger a one-time delivery, for example.
- the client may subscribe to a named object, which may trigger a delivery.
- the client may be updated when the named object changes.
- the client may subsequently cancel the subscription.
- the client may not request or subscribe to atoms, which may be configured to remain in the network.
- the client may be made aware of the names and/or contextual information (e.g. , metadata) of the atoms.
- the client may publish objects (e.g., sub-scopes and/or named objects) into the scope (e.g., if the client is authorized to do so).
- the client may also publish atoms (e.g. , new atoms).
- the client may remove objects (e.g., subject to authorization) from the scope.
- FIG. 1 illustrates an example frame structure.
- the frame structure may be used to communicate with the network and/or to handle computation.
- An example frame structure (e.g., FIG. 1) may include one or more of the following fields.
- An FTYPE field may represent the type of the frame.
- FTYPE may be: PUB (which may indicate object publication by a client), REQ (which may indicate a request for an object by a client), SUB (which may indicate subscription by a client), CMP_REQ (which may indicate a computation request by a client), or CMP_SUB (which may indicate a computation subscription by a client).
- the frame structure may include an FI field, which may include information about the frame structure.
- the FI field may indicate which fields are present in the frame. If a particular field may have a multiplicity of more than 1, the FI field may indicate how many are present in the frame. For variable field lengths, the FI field may indicate what the lengths are or where the field boundaries are.
- the frame structure may include an RID field, which may represent the name/ID given to an object resulting from the computation.
- An XID field may represent the name/ID of a code object to be executed.
- An OIDn field may represent the name/ID of the nth data object on which the code is to act.
- An R_CX field may represent the context information/metadata associated with the RID.
- An X CX field may represent the context information/metadata associated with the XID.
- An On_CX field may represent the context information/metadata associated with OIDn.
- An X_Pld field may represent the code payload (e.g., actual code to be executed).
- An On_Pld field may represent the nth data payload.
- the frame structure may include a check string (not shown).
- the check string may be used to verify the integrity of the frame.
- the frame described herein may be segmented for communication (e.g., when the frame is large).
- the XID and OID fields may have a NO_ID value.
- the RID field may comprise a "PUB" field (or bit). If SET, the PUB field may indicate that the result should be published into the network under the name RID. If not SET, the network may return the result to the client without storing it, for example.
- the XID, OID and/or RID fields may have a structure and may include scope information for where the objects may be found.
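- The field list above can be pictured as a simple container; the field names mirror the description, while the dataclass itself and the NO_ID sentinel are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

NO_ID = "NO_ID"  # assumed sentinel meaning "no object name supplied"

@dataclass
class Frame:
    ftype: str                                        # PUB, REQ, SUB, CMP_REQ, CMP_SUB, ...
    fi: dict = field(default_factory=dict)            # which fields are present, multiplicities, lengths
    rid: Optional[str] = None                         # name/ID of the computation result
    xid: str = NO_ID                                  # name/ID of the code object to execute
    oids: List[str] = field(default_factory=list)     # OID1 .. OIDn
    r_cx: Optional[dict] = None                       # metadata associated with RID
    x_cx: Optional[dict] = None                       # metadata associated with XID
    o_cx: List[dict] = field(default_factory=list)    # metadata associated with each OIDn
    x_pld: Optional[bytes] = None                     # code payload
    o_plds: List[bytes] = field(default_factory=list) # data payloads
    publish_result: bool = False                      # the PUB bit carried with the RID field
```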
- FIG. 2 shows an example frame structure that may be used for publication.
- An FTYPE field may indicate that the frame type is PUB.
- the frame may include the ID of the object being published, the object itself (in the payload), and/or metadata.
- the publication may be extended to publish multiple objects simultaneously.
- FIG. 3 shows an example frame structure for subscription and/or requests for objects.
- the context field may not be needed. Multiple objects may be subscribed to/requested at the same time (not shown).
- Computation may be illustrated herein. For example, computation may apply in the case of code acting on a single data object or no data objects; the context fields may be ignored for simplicity purposes.
- the example may apply to both CMP_REQ and CMP_SUB operations. A difference between these operations may be that CMP_REQ may require a one-time computation, whereas CMP_SUB may require updates.
- CMP_SUB may require updates when an object component (e.g., any of the object components, for example, code or data) is updated.
- FIG. 4 shows an example computation frame.
- the RID and XID fields may be required.
- the OID field may not be required.
- the client may request a computation without providing an input (e.g. , data is "integrated" into the code itself).
- the X_Pld and O_Pld fields may not be required. Further, one or more of the following may be possible. If the XID field is not set to NO_ID and X_Pld is present, the Network may check for the existence of XID and may use X_Pld as a default option if no XID is found.
- If the XID field is not set to NO_ID and X_Pld is not present, the Network may check for the existence of XID. If XID cannot be found, the Network may return an error in response to the request. If the XID field is set to NO_ID and X_Pld is present, the Network may use X_Pld for computation. If the XID field is set to NO_ID and X_Pld is not present, the Network may declare a computation error. The same rules may apply to the OID and/or O_Pld fields.
- the client may request a computation that uses existing code (e.g. , code that is in scope for the client to use) or provides code to compute. Computation may operate on pre-existing named data objects, or provide such input within the frame.
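- The XID/X_Pld fall-back rules above may be summarized as a small resolution routine. Only the ordering of the rules comes from the description; the lookup callable and the error type are assumptions.

```python
NO_ID = "NO_ID"  # assumed sentinel, as in the frame sketch above

class ComputationError(Exception):
    pass

def resolve_code(xid, x_pld, lookup):
    """Apply the XID/X_Pld rules; lookup(name) returns the stored object or None."""
    if xid != NO_ID and x_pld is not None:
        stored = lookup(xid)
        return stored if stored is not None else x_pld    # X_Pld as a default option
    if xid != NO_ID and x_pld is None:
        stored = lookup(xid)
        if stored is None:
            raise ComputationError(f"{xid} not found")     # error returned to the client
        return stored
    if xid == NO_ID and x_pld is not None:
        return x_pld                                       # use the supplied payload
    raise ComputationError("no code named or supplied")    # NO_ID and no payload
```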
- the location and forwarding frameworks associated with the system may be used to efficiently obtain one or more of these components (e.g. , by properly scoping the code and data objects).
- the example CMP frame described herein may be used to implement other operations such as PUB, SUB, REQ, and/or the like.
- the X_Pld field may comprise a simple code to publish the object provided in O_Pld and the object name.
- the XID and OID fields may be left empty.
- the result (e.g., the returned result) may be the result of the publication (e.g., a success or error code).
- the RID field may be left empty (e.g., the result may be returned instead of stored).
- CMP_SUB or CMP_REQ may be used, respectively, to fill in the required object name in OID (O_Pld may be left empty).
- the X_Pld field may be a simple "do nothing" instruction and the XID may be empty. A special "do nothing" XID field may be reserved. The RID may be empty or set to be the same as OID. The resulting object may be returned by the network.
- an application may use one or more APIs to provide the network with a set of application-specific processing atoms.
- the network may use information about these atoms, which may be provided by the APIs, to distribute them across the network, for example, to the nodes that are capable of doing the processing.
- a network node (e.g., every network node) may be capable of one or more of the following operations.
- a known (e.g., by name) computation "program" (e.g., atom or otherwise) may be requested to process a data object (e.g., passed within the request or named). Multiple data objects may be aggregated (e.g., appended or pre-pended) together as needed.
- the operations may be executed in sequence, and conditional execution and loops may be supported.
- a network node (e.g., every network node) may be capable of executing scripts that may call other known execution objects, which may be scripts (e.g., scripts stored in the network by name) or atoms.
- the client may be constrained to provide such scripts as "code.”
- the application may constrain what is possible by providing a set of atoms, which may be implemented in efficient ways.
- the scripting approach may allow the clients flexibility to request arbitrary computational tasks while retaining efficiency with the requests.
- the client and/or the application itself may enrich (e.g., in a continual manner) what may be done by making scripts available for others to use (e.g. , by publishing the scripts into the network).
- since scripts may be limited to referencing known objects (e.g., within scopes that are accessible to the scripts), finding and/or accessing these objects may be resolved by scoping, for example.
- a client may publish several scripts inside a scope clientapp.example.biz that make use of atoms inside scopes app1.example.com and app2.example.org. Another client may then request a computation for a script calling scripts of clientapp.example.biz as well as atoms of app3.example.com.
- Potential system complexity may arise as a result of tight coupling between services, for example. Such complexity may be reduced by setting appropriate rules to simplify the operations. For example, rules may be set to reduce circular calls. As another example, a rigid hierarchy may be enforced between services. For instance, if A imports B, then B may not import A; if C also imports A, then B may not import C either.
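- The rigid-hierarchy rule above amounts to keeping the import graph free of loops: B may not import A if A already imports B, directly or through another service such as C. The checker below is an illustrative sketch; the graph representation is an assumption.

```python
def would_create_cycle(imports: dict, importer: str, imported: str) -> bool:
    """imports maps a service name to the set of services it imports."""
    # Adding importer -> imported is forbidden if importer is already reachable
    # from imported through existing import edges.
    stack, seen = [imported], set()
    while stack:
        node = stack.pop()
        if node == importer:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(imports.get(node, ()))
    return False

# Example: A imports B and C imports A; letting B import C would close a loop.
imports = {"A": {"B"}, "C": {"A"}}
assert would_create_cycle(imports, "B", "C")
assert not would_create_cycle(imports, "C", "B")
```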
- the example approach described herein may provide a framework for access control (e.g., how a network may verify that a client device has the necessary rights to access the resources such as atoms or named objects).
- the client may go through an authorization process with the application that "owns" the scope (e.g. , the application, not the network, may be configured to authorize the client to the network).
- the client may be provided with one or more of the following.
- the client may be provided with a "secret"/"authorization key" (AK), which may be used by the client to authorize access to objects within the scope (the network may be provided with the same "secret"/"authorization key" (AK)).
- the client may be provided with a way to derive authorization keys for sub-scopes (e.g., a scope authorization that implies authorization for one or more of the sub-scopes).
- the client may include the AK (or information obtained from AK using a standard cryptographic authorization) in the context/metadata field associated with the object.
- the network may use the information provided by the AK to verify access rights to the object prior to allowing the operations. If the client has access rights to the objects (e.g., RID, XID, OID) on which it is requesting operations, authorization may be granted; otherwise, the request may result in an authorization denied error.
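- One common way to realize the "information obtained from AK using a standard cryptographic authorization" mentioned above is an HMAC over the object names. This is only an assumed concretization; the disclosure does not fix a particular algorithm.

```python
import hashlib
import hmac

def make_auth_token(ak: bytes, rid: str, xid: str, oid: str) -> str:
    """Client side: derive a token from the authorization key and the object names."""
    message = "|".join((rid, xid, oid)).encode()
    return hmac.new(ak, message, hashlib.sha256).hexdigest()

def verify_access(ak: bytes, rid: str, xid: str, oid: str, token: str) -> bool:
    """Network side: recompute the token and compare in constant time."""
    return hmac.compare_digest(make_auth_token(ak, rid, xid, oid), token)

# Example: both sides hold the same AK provisioned by the scope-owning application.
ak = b"shared-secret-from-application"
token = make_auth_token(ak, "result-1", "script-7", "sensor-data-42")
assert verify_access(ak, "result-1", "script-7", "sensor-data-42", token)
```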
- Compute nodes may implement a "compute node runtime."
- a compute node runtime may invoke atoms and scripts.
- a compute node runtime may include a Hardware Abstraction Layer (e.g., enabling the runtime to support heterogeneous compute nodes), virtualization support to run scripts and atoms in an isolated and resource-constrained environment, and virtual machines or interpreters (e.g., a V8 JavaScript VM or a Python VM).
- Atoms may need to be packaged (e.g., in binary or text format) for multiple platforms.
- Atom implementation may use the actor model (e.g., Akka or the like). Atoms may be implemented as individual actors (e.g., a unit of computation that may have a private state and/or process messages sequentially).
- Scripts may (e.g., may always) be packaged as a portable executable (e.g., a Python or JavaScript script or Java bytecode) which may be run on any compute node.
- a portable executable e.g., a Python or JavaScript script or Java bytecode
- Each compute node may be running zero or more atoms (e.g., in individual threads), which may be made accessible from scripts through function calls. Examples of function calls may include synchronous and asynchronous calls to an atom's methods.
- Function calls may be translated by the runtime component into asynchronous (resp. synchronous) messages to the atom.
- the compute node runtime may translate the function call into a message to a remote atom (and may wait for a reply if this is a synchronous call).
- Messages between scripts and remote atoms may be CMP_SUB/REQ messages.
- Messages between scripts and remote atoms may use the OID of the atom.
- a code payload of the message may be a call to a local atom.
- FIG. 5 illustrates a system model and shows an exemplary structure for scripts and atoms inside a compute node. Arrows and associated text describe typical interface functions between components in pseudo-code using Python syntax.
- a compute node runtime may have both atoms loaded in memory (or available for dynamic loading).
- a compute node runtime may dynamically load the script and use the script as part of a framework that creates a Script instance and operates its message I/O until the script completes (e.g. , until self.result is set in the constructor or a message handler). Creating an instance of the script may call its init method. The script may stay in memory until its "result" attribute is set.
- the compute node runtime may publish the "result" object back to the client and may destroy or cache the script class instance.
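- The script lifecycle sketched around FIG. 5 (instantiate, pump messages, stop once "result" is set) may look roughly as follows. The Script base class and the runtime loop are assumptions in the spirit of the figure's Python-style pseudo-code, not the disclosed implementation.

```python
class Script:
    """Base class an application script might subclass (assumed interface)."""

    def __init__(self, atoms, input_data):
        self.atoms = atoms      # proxies for local/remote atoms, callable by name
        self.result = None      # the runtime stops the script once this is set
        self.on_start(input_data)

    def on_start(self, input_data):
        """Overridden by the script; may already set self.result."""

    def on_message(self, sender, message):
        """Handler for asynchronous messages sent back by atoms."""

def run_script(script_cls, atoms, input_data, inbox):
    """Minimal runtime loop: create the instance and pump messages until done."""
    script = script_cls(atoms, input_data)
    while script.result is None:
        sender, message = inbox.get()   # e.g., a queue.Queue fed with atom messages
        script.on_message(sender, message)
    return script.result                # published back to the client under the RID
```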
- Atoms may be called synchronously or asynchronously. Atoms may send asynchronous messages back to the script.
- a compute node may decide to run a script.
- a compute node may decide to run a script if the compute node holds most of the atoms and if the remaining atoms are close by.
- a compute node may decide to run a script based on an estimation of the throughput to/from those atoms (e.g., based on previous measurements), to estimate the network load of running the script locally.
- the decision to run a script on a node may take into account the position of all required atoms, as well as the estimated load on the network for communicating with remote atoms.
- a compute node may download an atom locally to further minimize a load.
- a compute node may decide to forward the request even if it has all atoms loaded locally, e.g., if the compute node is currently under heavy load.
- a compute node may decide to forward the request even if it has all atoms loaded locally; e.g., the location of input data may also be a factor in the decision whether to execute locally.
- Compute node runtime may implement a fog cloud protocol.
- Compute node runtime may record and forward/coalesce subscription messages.
- Compute node runtime may (e.g., may also) forward publication messages towards subscribers.
- a small fog cloud network is described where a simple forwarding scheme may be sufficient.
- Techniques described herein may be used to scale a system up.
- FIG. 6 describes an example of a distributive computing procedure including atoms publication and script computation request.
- a client device may not be part of the distributed computing system. Examples of client devices include wireless transmit/receive units (WTRUs), including, for example, a smartphone, or an IoT device such as a sensor or actuator.
- a first compute node and a second compute node are provided. The first compute node and a second compute node may be collocated with switches or routers in a networked system.
- a source node for atoms may be provided. The source node may be a repository where atoms published by application providers are stored. The source node may be distant (e.g., one or more hops away from the first and second compute nodes).
- FIG. 6 depicts the shortest path from the first compute node to the source node as through the second compute node.
- a controller function is provided. The controller function distributes application policy information to the first compute node and the second compute node.
- an application may be provisioned in the edge cloud.
- Atoms are made available on one or more source nodes, and policy associated with this application is provisioned on a controller.
- the policy may be described using a declarative language such as JSON or XML.
- the policy may include information such as a descriptor of the atoms (e.g., identity, resource requirement, preferred deployment density (e.g., not less than 2 hops away, not more than 5 hops away from other atoms of the same type), global minimum and maximum number of atoms of this type in the system, affinity with other atoms (e.g., collocated with atom A, within X hops of atom B, etc.), flag for synchronous and/or asynchronous invocation support, list of methods (e.g., supported message handlers) implemented by this atom, etc.).
- Compute nodes composing the system may discover their neighbors and may need to establish a persistent connection with them, depending on the underlying transport protocol used for messages.
- compute nodes subscribe (possibly implicitly) to core topics, e.g., including "local information" and "application policy".
- the controller function may publish a new application policy in the system, using a "Publish" message type.
- the "Publish" message type may include the OID of an application policy object (e.g., a hash value of the file), the policy file as object payload (O_Pld), and context/metadata information (e.g., including the application ID, version, etc.).
- the application policy may include one or more of: a list of atom OIDs and related information such as size, code package format, signature, and the minimum, maximum, initial, and preferred number of atoms that should be deployed in the network, etc.; inter-atom affinity, e.g., a list of atoms commonly used together in a script; and a public key from the application provider, enabling verification of the origin of the atoms' program objects.
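- An application policy of the kind described above, expressed here as a Python dict that would be serialized to JSON or XML, might look as follows. Every concrete field name and value is an illustrative assumption; only the categories of information (atom descriptors, deployment bounds, affinity, provider key) come from the description.

```python
application_policy = {
    "application_id": "myapp.example.com",
    "version": "1.0",
    "atoms": [
        {
            "oid": "sha256:9f2c...",                 # hash of the atom code package
            "size_bytes": 1048576,
            "package_format": "python-wheel",
            "signature": "base64:MEUCIQ...",
            "deployment": {"min": 1, "max": 10, "initial": 2, "preferred": 4},
            "density": {"min_hops_apart": 2, "max_hops_apart": 5},
            "affinity": {"collocate_with": ["atom-A"], "within_hops": {"atom-B": 3}},
            "invocation": {"synchronous": True, "asynchronous": True},
            "methods": ["ingest", "aggregate", "query"],
        }
    ],
    "inter_atom_affinity": [["atom-1", "atom-2"]],   # atoms commonly scripted together
    "provider_public_key": "-----BEGIN PUBLIC KEY----- ...",
}
```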
- All compute nodes may be subscribers of application policies. Publication may be flooded in the network and each compute node may process it. Processing may be preceded with a random delay, so that the decisions of neighboring nodes (e.g., that processed the application policy earlier) may be factored in by compute nodes that process the application policy (e.g., relatively later). Policies may be reevaluated periodically, ensuring that the atoms' repartition in the network evolves to match usage over time.
- the second compute node decides to host atoms #1 and #2 (e.g., because it is the first to process the application policy and sees no neighbors are hosting those atoms).
- the second compute node may subscribe for those atoms, and may receive them from the source (as depicted in this example).
- the second compute node may receive the atoms from any other node holding them.
- a response contains the atom program and metadata (e.g. , including a signed hash of the atom program that may enable verifying that it originates from the application provider).
- the second node may load the atoms in memory. The second node may "subscribe for computation" for those atoms.
- a hop count in the message (e.g. , as a subfield of FI) may be used to inform the recipients of the distance of a particular atom, which may be factored in the decision to run the script locally.
- the "subscribe for computation” message may be a new message type SUBCOMP (otherwise similar to the existing subscribe message), using the OID of the atom.
- the "subscribe for computation" message may be a regular subscribe message (also using the OID of the atom), with a new "computation" flag set.
- By sending the "subscribe for computation" message, a node indicates that it holds the atom whose OID is given in the message, and may be willing to accept messages for this atom.
- the first compute node may apply a delay before processing the application policy.
- the length of this delay may be random.
- the length of this delay may depend on a first analysis of the application policy and of context information (such as, for example, the currently available local computing resources).
- the first compute node may already use most of its local resources to host atoms, and therefore may decide to wait for less loaded neighbors to take their decisions.
- the first compute node may receive "subscribe for computation" messages from the second node, and may update its local record of subscriptions. Since the atoms are available nearby, the first compute node may decide not to locally host atoms for this application.
- the first and second compute nodes may publish context information including network and computing resources load (e.g. , on a periodic basis).
- the context information may be encoded in an XML or JSON document in the O_Pld field of the publish message.
- Compute nodes may collect this information and maintain a live local representation of computing and network load throughout the system. Scalability considerations may be applied.
- information may be aggregated (e.g., to limit the impact of signaling on the system) .
- a client device may assemble a script (such as, for example, a JavaScript or Python script making use of atoms described herein).
- the client device may send a computation request.
- the computation request may specify the returned object's OID.
- the computation request may place the script code into the X_Pld field.
- Input data may be present.
- the computation request may place input data in an O_Pld field.
- scripts and/or input data objects may be published separately, and may be included by OID in the computation request XID (resp. OID) field.
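- Putting the client side together, a computation request along the lines of FIG. 6 might be assembled as below. The field names reuse the frame layout described earlier; the script text, identifiers, and the toy serialization are assumptions.

```python
import json

script_code = 'result = atoms.call("atom_1", "aggregate", atoms.call("atom_2", "read"))'

cmp_req = {
    "FTYPE": "CMP_REQ",
    "RID": "myapp.example.com/results/job-0001",   # OID under which the result is returned
    "XID": "NO_ID",                                # no pre-published script is referenced
    "X_Pld": script_code,                          # script code carried inline
    "O_Pld": b"\x00\x01\x02",                      # optional input data
}

def send_frame(frame, socket_like):
    """Toy serialization; a real system would define the actual wire encoding."""
    payload = {k: (v.hex() if isinstance(v, bytes) else v) for k, v in frame.items()}
    socket_like.sendall(json.dumps(payload).encode())
```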
- the first compute node may receive the message.
- the first compute node may parse the script and collect the list of atoms that the script is using.
- the result of the parsing operation (e.g., the list of used atoms) may be placed in the computation request metadata.
- the computation request metadata may be part of the computation request forwarded to other nodes.
- FIG. 6 depicts the list of atoms used as (atom #1, atom #2).
- the first compute node may decide to process locally.
- the first compute node may decide to forward the request.
- a subscription for computation for both atoms #1 and #2 is found on the same interface.
- the first compute node may determine to forward the message over this interface.
- Decision process execution may occur on the second compute node.
- the second compute node may get the parsing result from metadata (or may parse the script, e.g., if this information was not present in metadata).
- the second compute node may determine that both needed atoms are present locally.
- the second compute node may determine that the local processing load is low enough that the second compute node decides to process locally.
- the second compute node could take the decision to forward the request (such as, for example, if the processing load was high locally and/or if the needed atoms were also available further in the network).
- the second compute node may perform the requested computation (e.g., by interpreting the script locally over a Python or JavaScript VM). Invocations of methods over atoms may be translated into synchronous or asynchronous calls to local atoms.
- Local atoms may (e.g., in response) send messages to other atoms or send messages back to the calling script (which may implement message handlers for this purpose).
- a script may produce a computation response, which the second compute node may publish in a response message.
- the computation response may be present in the O_Pld section and may be associated with an OID equal to the RID provided in the computation request.
- a third atom #3 may be located on a third compute node. The atom #3 may not be available on the second compute node.
- the second compute node may decide to compute locally. For example, when the script invokes atom #3, the runtime component on the second compute node may send a CMP_SUB/REQ message to the third compute node, including a short script which invokes the required action on atom #3. The runtime component on the second compute node may wait for a response from the third compute node (e.g., if the call to the atom #3 method was synchronous) and use the returned object as output of the function call in the script. The runtime component on the second compute node may proceed immediately with interpreting the rest of the script (e.g., if the call to the atom #3 method was asynchronous).
- the runtime component on the second compute node may send a CMP_SUB/REQ message to the third compute node, including a short script which invokes the required action on atom #3.
- the second compute node may decide to get atom #3 and compute locally.
- the second compute node may subscribe to atom #3, wait for it to be received and installed locally, and then proceed with executing the script.
- the second compute node may decide to forward the computation request towards a more appropriate target (e.g. , towards the third compute node) .
- FIG. 6 depicts a small network, such as, for example, a network where subscriptions for policies and computations and publication of load information may reach all nodes.
- Scalability to a larger network may be achieved using one or more of the following techniques.
- computing nodes may be grouped together in domains. Aggregated information may be sent between domains (e.g., subscriptions for computation for any atom of a given application).
- a Time-to-Live may avoid flooding the network.
- a Time-to-Live may create a "horizon" (e.g. , a maximum number of hops) beyond which it is impossible for a compute node to have information about atoms.
- subscription information may be disseminated in an aggregated, rate-limited form.
- An aggregated, rate-limited form may ensure that all compute nodes have a view of the whole network which is recent for close by nodes and older for more distant nodes.
- subscriptions for computation and other information may be sent up (e.g. , from leaves to core) by default.
- If a compute node does not know how to handle a request, it may forward it up until it reaches a compute node aware of the location of the required atoms.
- FIG. 7 illustrates a decision process by a compute node upon reception of a processing request.
- the compute node may check if a request's metadata section (e.g., X_CX) includes parsing results placed there by another compute node. If found, the list of called atoms is obtained from this metadata. If parsing results are not found, the compute node may obtain the script from the X_Pld section of the request, or may fetch the script from the network using the XID from the request. The compute node may parse the script to obtain a list of atoms called by the script. The compute node may place the result in a metadata section of the request (e.g., in case the request may be forwarded to other compute nodes).
- the compute node may enumerate and evaluate (e.g., calculate a cost for) available strategies.
- the compute node may use as input one or more of: the list of called atoms, request metadata, and context information.
- Request metadata may include "Affinity” information of the script with certain node IDs, certain local atoms, etc.
- "Affinity" may be associated with a score which may indicate that a particular setup (e.g., "running on node with given nodeId") is mandatory or more or less highly preferred.
- "Affinity” information may be set by the originating client.
- "Affinity” information may be set by a compute node (e.g., a compute node's decision process algorithm may decide a strategy is preferred, and set it in the affinity section in order to reduce the processing power required to enumerate and evaluate strategies in other compute nodes).
- "Affinity" information may include one or more of: a preferred node ID (e.g., the INCENTIVE term for this node ID would be increased with a weight which may be provided by the client in metadata); a required node ID or set of node IDs (e.g., the cost may be infinite for all other nodes); a set of atoms which may be used locally (e.g., others may be remote); and a constraint on the compute node that may run the script (e.g., manufacturer, type, location, latency to end user, etc.). A set of atoms which may be used locally may limit the available strategies, since all required atoms may need to be gathered if not already present.
- Context information may include one or more of a list of subscriptions for computation, a list of processing load information, a list of network load information, and topological information.
- a list of subscriptions for computation from local and remote compute nodes may indicate where atoms are located. Elements of this list may have fields (such as, for example, nodeId, atomIds, interface, cost).
- the interface may be the egress interface that may be used to reach the node identified by nodeId (e.g., the interface the related subscriptions were received from).
- atomIds may be a list of atoms that the node has subscribed for computation for.
- Cost may be a number of hops to reach the node from this interface.
- a list of processing load information may include nodeId and/or processingLoad.
- a list of network load information may include linkId and/or linkLoad.
- the list of processing load information and the list of network load information may be built (e.g., and maintained) by the compute node.
- the list of processing load information and the list of network load information may be updated by the compute node (e.g., continuously).
- the compute node may receive and send "Publish compute and network load information" messages from/to remote and local compute nodes.
- Topological information may identify the links (identified by linkIds) and/or the number of hops to reach remote nodes (identified by nodeIds).
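- The context information listed above (computation subscriptions, processing load, link load, topology) may be held in a few simple records. The field names follow the text; the types and record layout are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ComputationSubscription:
    node_id: str          # remote node holding the atoms
    atom_ids: List[str]   # atoms it has subscribed for computation for
    interface: str        # egress interface used to reach node_id
    cost: int             # number of hops to reach the node over this interface

@dataclass
class ProcessingLoad:
    node_id: str
    processing_load: float   # e.g., fraction of CPU in use

@dataclass
class LinkLoad:
    link_id: str
    link_load: float
```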
- the compute node may enumerate possible strategies, such as, for example: forward over interface X; forward over multiple interfaces X, Y, ...; compute locally; gather atom(s) A, B, ... and compute locally; drop; etc. "Affinity" metadata may influence this enumeration by ruling out possible strategies.
- the compute node may calculate a cost for a strategy.
- "Affinity" metadata may influence cost calculation of a strategy, e.g., adding a positive cost (e.g., decreasing the odds of choosing this strategy) to "compute locally" if preferred atoms are not present locally.
- COST_local_computation may include a cost proportional to (local processing load - threshold).
- COST_remote_computations may be, for example, the number of hops to an atom + the cost of computation on the remote node, summed over all atoms not locally present (the cost for locally present atoms may be 0).
- INCENTIVE may be a negative cost applied to increase the chance to compute locally each time the request is forwarded (e.g., in order to avoid loops and reduce end-to-end latency). INCENTIVE may increase each time the request is forwarded while in the fog cloud network.
- COST_gather may be the sum of the costs of transmitting over each link from the atom source node(s).
- COST_hold may be the cost of holding the atom locally, which may be 0 if there is available caching space, and higher if older unused atoms should be evicted to make room.
- the compute node may determine which remote compute node is most likely to perform the request processing.
- COST_forwarding may be the network cost of forwarding the request and response between the present compute node and the target compute node.
- COST_computation may be the cost of the best local computation strategy on that node. If forwarding to more than one interface, an additional term may be added to take into account the cost of duplicating computation. Forwarding over multiple interfaces may not be efficient. Forwarding over multiple interfaces may be used when increased reliability is critical, or when multiple concurrent computations are the desired behavior. Enumerated strategies may include forwarding over a single interface.
- available information may be incomplete or outdated, especially for distant nodes.
- an atom may be known to be reachable over an interface from a distant node, but no topological information may be available to estimate the cost of reaching this node. Costs may be estimated based on local or historical values. As this process may be repeated at each hop, information may become more accurate as the request travels through the network.
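- The cost terms above may be combined into a toy strategy evaluator. The thresholds, weights, and strategy set are assumptions; only the shape of the terms (local cost proportional to load above a threshold, per-missing-atom remote cost, INCENTIVE discount, forwarding cost plus remote computation cost) follows the description.

```python
def cost_local(processing_load: float, threshold: float, alpha: float = 1.0) -> float:
    """COST_local_computation: proportional to (load - threshold), floored at 0."""
    return alpha * max(0.0, processing_load - threshold)

def cost_remote(missing_atoms, hops_to_atom, remote_compute_cost) -> float:
    """COST_remote_computations: hops + remote computation cost per missing atom."""
    return sum(hops_to_atom[a] + remote_compute_cost[a] for a in missing_atoms)

def evaluate_strategies(local_load, threshold, missing_atoms, hops_to_atom,
                        remote_compute_cost, gather_cost, hold_cost,
                        incentive, forward_options):
    strategies = {
        "compute_locally":
            cost_local(local_load, threshold)
            + cost_remote(missing_atoms, hops_to_atom, remote_compute_cost)
            - incentive,
        "gather_and_compute_locally":
            gather_cost + hold_cost + cost_local(local_load, threshold) - incentive,
    }
    for iface, (fwd_cost, remote_best) in forward_options.items():
        # COST_forwarding + best computation cost expected on the target node.
        strategies[f"forward_via_{iface}"] = fwd_cost + remote_best
    best = min(strategies, key=strategies.get)
    return best, strategies
```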
- Implementations described herein may be implemented and/or used, for example, in a wired and/or wireless network (e.g., in device(s) of such networks, such as network devices, end user devices, etc.). These devices may include a WTRU that works wirelessly or a WTRU used in a wired network (e.g., a computer or other end user device with a wired connection to the network).
- In a wireless communication network 800 (e.g., as shown in FIG. 8A), one or more disclosed embodiments may be implemented or used.
- the communications system 800 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users.
- the communications system 800 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications system 800 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like.
- the communications system 800 may include wireless transmit/receive units (WTRUs) 802a, 802b, 802c, and/or 802d (which generally or collectively may be referred to as WTRU 802), a radio access network (RAN) 803/804/805, a core network 806/807/809, a public switched telephone network (PSTN) 808, the Internet 810, and other networks 812, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements.
- Each of the WTRUs 802a, 802b, 802c, and/or 802d may be any type of device configured to operate and/or communicate in a wireless environment.
- the WTRUs 802a, 802b, 802c, and/or 802d may be configured to transmit and/or receive wireless signals and may include user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, consumer electronics, and the like.
- the communications systems 800 may also include a base station 814a and a base station 814b.
- Each of the base stations 814a, 814b may be any type of device configured to wirelessly interface with at least one of the WTRUs 802a, 802b, 802c, and/or 802d to facilitate access to one or more communication networks, such as the core network 806/807/809, the Internet 810, and/or the networks 812.
- the base stations 814a and/or 814b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like.
- While the base stations 814a, 814b are each depicted as a single element, it will be appreciated that the base stations 814a, 814b may include any number of interconnected base stations and/or network elements.
- the base station 814a may be part of the RAN 803/804/805, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
- the base station 814b may be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown).
- the cell may further be divided into cell sectors.
- the cell associated with the base station 814a may be divided into three sectors.
- the base station 814a may include three transceivers, i.e., one for each sector of the cell.
- the base station 814a may employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell.
- the base stations 814a and/or 814b may communicate with one or more of the WTRUs 802a, 802b, 802c, and/or 802d over an air interface 815/816/817, which may be any suitable wireless communication link (e.g. , radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.).
- the air interface 815/816/817 may be established using any suitable radio access technology (RAT).
- the communications system 800 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
- the base station 814a in the RAN 803/804/805 and the WTRUs 802a, 802b, and/or 802c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 815/816/817 using wideband CDMA (WCDMA).
- WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
- HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).
- the base station 814a and the WTRUs 802a, 802b, and/or 802c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 815/816/817 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A).
- the base station 814a and the WTRUs 802a, 802b, and/or 802c may implement radio technologies such as IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
- the base station 814b in FIG. 8A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like.
- the base station 814b and the WTRUs 802c, 802d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN).
- the base station 814b and the WTRUs 802c, 802d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
- the base station 814b and the WTRUs 802c, 802d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell.
- the base station 814b may have a direct connection to the Internet 810.
- the base station 814b may not be required to access the Internet 810 via the core network 806/807/809.
- the RAN 803/804/805 may be in communication with the core network 806/807/809, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 802a, 802b, 802c, and/or 802d.
- the core network 806/807/809 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
- the RAN 803/804/805 and/or the core network 806/807/809 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 803/804/805 or a different RAT.
- In addition to being connected to the RAN 803/804/805, which may be utilizing an E-UTRA radio technology, the core network 806/807/809 may also be in communication with another RAN (not shown) employing a GSM radio technology.
- the core network 806/807/809 may also serve as a gateway for the WTRUs 802a, 802b, 802c, and/or 802d to access the PSTN 808, the Internet 810, and/or other networks 812.
- the PSTN 808 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
- the Internet 810 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite.
- the networks 812 may include wired and/or wireless communications networks owned and/or operated by other service providers.
- the networks 812 may include another core network connected to one or more RANs, which may employ the same RAT as the RAN 803/804/805 or a different RAT.
- Some or all of the WTRUs 802a, 802b, 802c, and/or 802d in the communications system 800 may include multi-mode capabilities, i.e., the WTRUs 802a, 802b, 802c, and/or 802d may include multiple transceivers for communicating with different wireless networks over different wireless links.
- the WTRU 802c shown in FIG. 8A may be configured to communicate with the base station 814a, which may employ a cellular-based radio technology, and with the base station 814b, which may employ an IEEE 802 radio technology.
- FIG. 8B depicts a system diagram of an example WTRU 802.
- the WTRU 802 may include a processor 818, a transceiver 820, a transmit/receive element 822, a speaker/microphone 824, a keypad 826, a display/touchpad 828, non-removable memory 830, removable memory 832, a power source 834, a global positioning system (GPS) chipset 836, and other peripherals 838.
- base stations 814a and 814b, and/or the nodes that base stations 814a and 814b may represent, such as, but not limited to, a base transceiver station (BTS), a Node-B, a site controller, an access point (AP), a home node-B, an evolved home node-B (eNodeB), a home evolved node-B (HeNB), a home evolved node-B gateway, and proxy nodes, among others, may include some or all of the elements depicted in FIG. 8B and described herein.
- the processor 818 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like.
- the processor 818 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 802 to operate in a wireless environment.
- the processor 818 may be coupled to the transceiver 820, which may be coupled to the transmit/receive element 822.
- While FIG. 8B depicts the processor 818 and the transceiver 820 as separate components, it will be appreciated that the processor 818 and the transceiver 820 may be integrated together in an electronic package or chip.
- the transmit/receive element 822 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 814a) over the air interface 815/816/817.
- the transmit/receive element 822 may be an antenna configured to transmit and/or receive RF signals.
- the transmit/receive element 822 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
- the transmit/receive element 822 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 822 may be configured to transmit and/or receive any combination of wireless signals.
- the WTRU 802 may include any number of transmit/receive elements 822. More specifically, the WTRU 802 may employ MIMO technology. Thus, in one embodiment, the WTRU 802 may include two or more transmit/receive elements 822 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 815/816/817.
- the transceiver 820 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 822 and to demodulate the signals that are received by the transmit/receive element 822.
- the WTRU 802 may have multi-mode capabilities.
- the transceiver 820 may include multiple transceivers for enabling the WTRU 802 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.
- the processor 818 of the WTRU 802 may be coupled to, and may receive user input data from, the speaker/microphone 824, the keypad 826, and/or the display/touchpad 828 (e.g. , a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
- the processor 818 may also output user data to the speaker/microphone 824, the keypad 826, and/or the display/touchpad 828.
- the processor 818 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 830 and/or the removable memory 832.
- the non-removable memory 830 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
- the removable memory 832 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
- the processor 818 may access information from, and store data in, memory that is not physically located on the WTRU 802, such as on a server or a home computer (not shown).
- the processor 818 may receive power from the power source 834, and may be configured to distribute and/or control the power to the other components in the WTRU 802.
- the power source 834 may be any suitable device for powering the WTRU 802.
- the power source 834 may include one or more dry cell batteries (e.g. , nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
- the processor 818 may also be coupled to the GPS chipset 836, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 802.
- In addition to, or in lieu of, the information from the GPS chipset 836, the WTRU 802 may receive location information over the air interface 815/816/817 from a base station (e.g., base stations 814a, 814b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations.
- It will be appreciated that the WTRU 802 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
- the processor 818 may further be coupled to other peripherals 838, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
- the peripherals 838 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
- FIG. 8C depicts a system diagram of the RAN 803 and the core network 806 according to an embodiment.
- the RAN 803 may employ a UTRA radio technology to communicate with the WTRUs 802a, 802b, and/or 802c over the air interface 815.
- the RAN 803 may also be in communication with the core network 806.
- the RAN 803 may include Node-Bs 840a, 840b, and/or 840c, which may each include one or more transceivers for communicating with the WTRUs 802a, 802b, and/or 802c over the air interface 815.
- the Node-Bs 840a, 840b, and/or 840c may each be associated with a particular cell (not shown) within the RAN 803.
- the RAN 803 may also include RNCs 842a and/or 842b. It will be appreciated that the RAN 803 may include any number of Node-Bs and RNCs while remaining consistent with an embodiment.
- the Node-Bs 840a and/or 840b may be in communication with the RNC 842a. Additionally, the Node-B 840c may be in communication with the RNC 842b. The Node-Bs 840a, 840b, and/or 840c may communicate with the respective RNCs 842a, 842b via an Iub interface. The RNCs 842a, 842b may be in communication with one another via an Iur interface. Each of the RNCs 842a, 842b may be configured to control the respective Node-Bs 840a, 840b, and/or 840c to which it is connected. In addition, each of the RNCs 842a, 842b may be configured to carry out or support other functionality, such as outer loop power control, load control, admission control, packet scheduling, handover control, macrodiversity, security functions, data encryption, and the like.
- the core network 806 shown in FIG. 8C may include a media gateway (MGW) 844, a mobile switching center (MSC) 846, a serving GPRS support node (SGSN) 848, and/or a gateway GPRS support node (GGSN) 880. While each of the foregoing elements are depicted as part of the core network 806, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
- the RNC 842a in the RAN 803 may be connected to the MSC 846 in the core network 806 via an IuCS interface.
- the MSC 846 may be connected to the MGW 844.
- the MSC 846 and the MGW 844 may provide the WTRUs 802a, 802b, and/or 802c with access to circuit-switched networks, such as the PSTN 808, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and traditional land-line communications devices.
- the RNC 842a in the RAN 803 may also be connected to the SGSN 848 in the core network 806 via an IuPS interface.
- the SGSN 848 may be connected to the GGSN 880.
- the SGSN 848 and the GGSN 880 may provide the WTRUs 802a, 802b, and/or 802c with access to packet-switched networks, such as the Internet 810, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and IP-enabled devices.
- the core network 806 may also be connected to the networks 812, which may include other wired or wireless networks that are owned and/or operated by other service providers.
- FIG. 8D depicts a system diagram of the RAN 804 and the core network 807 according to an embodiment.
- the RAN 804 may employ an E-UTRA radio technology to communicate with the WTRUs 802a, 802b, and/or 802c over the air interface 816.
- the RAN 804 may also be in communication with the core network 807.
- the RAN 804 may include eNode-Bs 860a, 860b, and/or 860c, though it will be appreciated that the RAN 804 may include any number of eNode-Bs while remaining consistent with an embodiment.
- the eNode-Bs 860a, 860b, and/or 860c may each include one or more transceivers for communicating with the WTRUs 802a, 802b, and/or 802c over the air interface 816.
- the eNode-Bs 860a, 860b, and/or 860c may implement MIMO technology.
- the eNode-B 860a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 802a.
- Each of the eNode-Bs 860a, 860b, and/or 860c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in FIG. 8D, the eNode-Bs 860a, 860b, and/or 860c may communicate with one another over an X2 interface.
- the core network 807 shown in FIG. 8D may include a mobility management gateway (MME) 862, a serving gateway 864, and a packet data network (PDN) gateway 866. While each of the foregoing elements are depicted as part of the core network 807, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
- the MME 862 may be connected to each of the eNode-Bs 860a, 860b, and/or 860c in the RAN 804 via an S1 interface and may serve as a control node.
- the MME 862 may be responsible for authenticating users of the WTRUs 802a, 802b, and/or 802c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 802a, 802b, and/or 802c, and the like.
- the MME 862 may also provide a control plane function for switching between the RAN 804 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.
- the serving gateway 864 may be connected to each of the eNode-Bs 860a, 860b, and/or 860c in the RAN 804 via the S1 interface.
- the serving gateway 864 may generally route and forward user data packets to/from the WTRUs 802a, 802b, and/or 802c.
- the serving gateway 864 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs 802a, 802b, and/or 802c, managing and storing contexts of the WTRUs 802a, 802b, and/or 802c, and the like.
- the serving gateway 864 may also be connected to the PDN gateway 866, which may provide the WTRUs 802a, 802b, and/or 802c with access to packet-switched networks, such as the Internet 810, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and IP-enabled devices.
- the core network 807 may facilitate communications with other networks.
- the core network 807 may provide the WTRUs 802a, 802b, and/or 802c with access to circuit-switched networks, such as the PSTN 808, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and traditional land-line communications devices.
- the core network 807 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 807 and the PSTN 808.
- the core network 807 may provide the WTRUs 802a, 802b, and/or 802c with access to the networks 812, which may include other wired or wireless networks that are owned and/or operated by other service providers.
- FIG. 8E depicts a system diagram of the RAN 805 and the core network 809 according to an embodiment.
- the RAN 805 may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs 802a, 802b, and/or 802c over the air interface 817.
- the communication links between the different functional entities of the WTRUs 802a, 802b, and/or 802c, the RAN 805, and the core network 809 may be defined as reference points.
- the RAN 805 may include base stations 880a, 880b, and/or 880c, and an ASN gateway 882.
- the RAN 805 may include any number of base stations and ASN gateways while remaining consistent with an embodiment.
- the base stations 880a, 880b, and/or 880c may each be associated with a particular cell (not shown) in the RAN 805 and may each include one or more transceivers for communicating with the WTRUs 802a, 802b, and/or 802c over the air interface 817.
- the base stations 880a, 880b, and/or 880c may implement MIMO technology.
- the base station 880a for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 802a.
- the base stations 880a, 880b, and/or 880c may also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like.
- the ASN gateway 882 may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network 809, and the like.
- the air interface 817 between the WTRUs 802a, 802b, and/or 802c and the RAN 805 may be defined as an R1 reference point that implements the IEEE 802.16 specification.
- each of the WTRUs 802a, 802b, and/or 802c may establish a logical interface (not shown) with the core network 809.
- the logical interface between the WTRUs 802a, 802b, and/or 802c and the core network 809 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management.
- the communication link between each of the base stations 880a, 880b, and/or 880c may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations.
- the communication link between the base stations 880a, 880b, and/or 880c and the ASN gateway 882 may be defined as an R6 reference point.
- the R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 802a, 802b, and/or 802c.
- the RAN 805 may be connected to the core network 809.
- the communication link between the RAN 805 and the core network 809 may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example.
- the core network 809 may include a mobile IP home agent (MIP-HA) 884, an authentication, authorization, accounting (AAA) server 886, and a gateway 888. While each of the foregoing elements are depicted as part of the core network 809, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
- the MIP-HA 884 may be responsible for IP address management, and may enable the WTRUs 802a, 802b, and/or 802c to roam between different ASNs and/or different core networks.
- the MIP-HA 884 may provide the WTRUs 802a, 802b, and/or 802c with access to packet-switched networks, such as the Internet 810, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and IP-enabled devices.
- the AAA server 886 may be responsible for user authentication and for supporting user services.
- the gateway 888 may facilitate interworking with other networks. For example, the gateway 888 may provide the WTRUs 802a, 802b, and/or 802c with access to circuit-switched networks, such as the PSTN 808, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and traditional land-line communications devices. In addition, the gateway 888 may provide the WTRUs 802a, 802b, and/or 802c with access to the networks 812, which may include other wired or wireless networks that are owned and/or operated by other service providers.
- the RAN 805 may be connected to other ASNs and the core network 809 may be connected to other core networks.
- the communication link between the RAN 805 and the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs 802a, 802b, and/or 802c between the RAN 805 and the other ASNs.
- the R5 reference point may include protocols for facilitating interworking between home core networks and visited core networks.
- Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
- a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Abstract
Systems, methods and instrumentalities are disclosed for a computing node in a wireless network. The computing node may receive a computation request that comprises a script for execution. The computation request may further comprise input data or an input data handle. The script may be parsed to determine one or more atoms (e.g., an executable code component) referenced by the script. The one or more atoms may be preloaded (e.g., prior to the receiving of the computation request) at one or more nodes in the wireless network. The computing node may determine where each of the one or more atoms are pre-loaded. Upon determining that all of the one or more atoms are preloaded at the computing node, the computation request may be performed locally at the computing node. The compute node instead may decide to forward the computation request (e.g., even if the computing node has all atoms loaded locally), for example, if the computing node is under a heavy load and/or depending upon the location of input data. Upon determining that at least one of the one or more atoms are not preloaded at the computing node, the computing node may send a request for the atoms that are not pre-loaded locally, fetch the atoms that are not pre-loaded locally, and perform the computation request locally. Upon determining that at least one of the one or more atoms are not preloaded at the computing node, the computing node may determine that the computation request should be performed, at least in part, at another node.
Description
SYSTEMS AND METHODS ASSOCIATED WITH EDGE COMPUTING
CROSSREFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application No.
62/269,409, filed on December 18, 2015, the contents of which are hereby incorporated by reference herein.
BACKGROUND
[0002] Edge Computing may extend cloud computing and services to the edge of the network, which may comprise computing nodes deployed inside access networks, mobile devices, Internet of Things (IoT) end devices (e.g., sensors and actuators), and/or the like. Edge Computing may have the potential to provide data, computing, storage, application services, and/or other services, at the network edge. This may be similar to Cloud Computing (e.g., use of remote data centers). Classical cloud computing may not apply to the problems Edge Computing is designed to solve. Therefore, designs and approaches may be desirable to realize the potential of Edge Computing.
SUMMARY
[0003] Systems, methods and instrumentalities are disclosed for a computing node in a wireless network. The computing node may receive a computation request that comprises a script for execution. The script may be parsed to determine one or more atoms (e.g., an executable code component) referenced by the script. The one or more atoms may be preloaded (e.g., prior to the receiving of the computation request) at one or more nodes in the wireless network. The computing node may determine where each of the one or more atoms are preloaded.
[0004] Upon determining that all of the one or more atoms are preloaded at the computing node, the computation request may be performed locally at the computing node. The compute node instead may decide to forward the computation request (e.g., even if the computing node has all atoms loaded locally), for example, if the computing node is under a heavy load and/or depending upon the location of input data (the computation request may further comprise input data or an input data handle).
[0005] Upon determining that at least one of the one or more atoms are not preloaded at the
computing node, the computing node may send a request for the atoms that are not pre-loaded locally, fetch the atoms that are not pre-loaded locally, and perform the computation request locally.
[0006] Upon determining that at least one of the one or more atoms are not preloaded at the computing node, the computing node may determine that the computation request should be performed, at least in part, at another node.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 depicts an example frame structure, which may be used in implementations described herein.
[0008] FIG. 2 depicts an example frame structure, which may be used for object publication.
[0009] FIG. 3 depicts an example frame structure, which may be used for subscription and/or requests for objects.
[0010] FIG. 4 depicts an example frame structure, which may be used for computation.
[0011] FIG. 5 depicts an example system model for scripts and atoms inside a compute node.
[0012] FIG. 6 depicts an example distributive computing procedure.
[0013] FIG. 7 depicts an example compute node acting on a processing request.
[0014] FIG. 8A depicts a diagram of an example communications system in which one or more disclosed embodiments may be implemented.
[0015] FIG. 8B depicts a system diagram of an example wireless transmit/receive unit
(WTRU) that may be used within the communications system illustrated in FIG. 8A.
[0016] FIG. 8C depicts a system diagram of an example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.
[0017] FIG. 8D depicts a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.
[0018] FIG. 8E depicts a system diagram of another example radio access network and an example core network that may be used within the communications system illustrated in FIG. 8A.
DETAILED DESCRIPTION
[0019] A detailed description of illustrative embodiments will now be described with
reference to the various Figures. Although this description provides a detailed example of possible implementations, it should be noted that the details are intended to be exemplary and in no way limit the scope of the application.
[0020] The development of Edge Computing may be driven by several forces or needs. For example, network operators may be willing to provide additional value added services and/or better performance/quality experience to end users, by leveraging the unique characteristics of their Access Network such as proximity to the end user, awareness of users' identity, and/or the like. There may also be a need to complement under-powered IoT devices with computing capability at the edge of the network in order to enable complex operations or operations involving large amounts of data and devices. Cloud computing itself may also drive the development of Edge Computing. For instance, cloud computing may lead to more integration of software development and deployment activities (e.g., as illustrated by the Development and Operations or DevOps model of development) in order to cope with increasing system complexity, for example. This technology-enabled trend (e.g., enabled by technologies such as network and function virtualization) may lead to the merging of network infrastructure with the information technology (IT) world, and may reduce capital expenditure (CAPEX) and/or operating expenditure (OPEX) for the application provider. Edge Computing may provide a way to extend this flexibility (e.g., out of the data centers into the rest of the Internet and even end user devices), which may ultimately facilitate innovation for new classes of applications.
[0021] Edge Computing development may include, for example, developments towards mobile network applications (or "mobile edge computing") and/or IoT-focused applications. Computing may be provided over a large distributed network of small-footprint devices (e.g., those with limited computing and/or storage capability), which may be referred to as a "fog" (e.g., "fog computing"). Some classical cloud computing paradigms may not apply to "fog computing." For example, in a classical paradigm, a user of cloud resources may provide a complete system to be executed as part of a self-contained package, e.g., a Virtual Machine (VM) or a container (such as the Linux LXC). The computing resources of a single "server" or a highly interconnected set of processing cores may be allocated to support the needs of this package. In a fog computing environment, sufficient computing power may not be available on a single device (e.g., computation may be distributed and coordinated across multiple devices in the network). Coordinating and/or scaling computation across multiple devices in such an environment may be challenging due to requirements associated with state synchronization and/or messaging loads, for example.
[0022] Information-Centric Networking (ICN) may be utilized to address the challenges described herein. With ICN, programs (or components of a program) may be treated as named objects, in the same way as data (e.g., content/information) objects, and/or by extensions that support distributed computing. A named object may operate on one or more other objects (e.g., including other named objects), and the named objects may be referred to as "code", whereas the objects that they may operate on may be referred to as "data." An ICN-based system may treat code components as it treats data (e.g., for the purpose of in-network storage).
[0023] ICN may be extended to operate in a distributed computing environment (e.g., a fog computing environment) in various ways. For example, with a "functional programming" approach, a program (e.g., every program) may comprise a sequence of calls of the same <code, data> structure as the program itself. The <code> component may include one or more pieces of <code> and/or embedded "sub-routines" (e.g., other <code, data> components), while other programming constructs (e.g., persistent/global variables, state, and/or the like) may or may not be allowed. The functional programming approach may be substantially stateless. For example, a <code, data> request may be executed once all components are available and the availability may be independently ensured (e.g., state synchronization may not be required). The functional programming approach may also be computation generic. For example, it may resemble a Turing Machine.
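For illustration only, the nested <code, data> structure described above may be sketched in Python as follows. The Request class, the evaluate function, and the example operations are hypothetical names introduced for this sketch and are not part of the disclosure.

```python
# A minimal sketch of the <code, data> idea: a request pairs a code object with the data
# objects it acts on, and code objects may themselves embed further <code, data> sub-requests.
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class Request:
    code: Union[Callable, "Request"]          # executable code, or an embedded sub-request
    data: List[Union[object, "Request"]] = field(default_factory=list)

def evaluate(req: Request):
    """Evaluate a <code, data> request once all of its components are available (stateless)."""
    code = evaluate(req.code) if isinstance(req.code, Request) else req.code
    args = [evaluate(d) if isinstance(d, Request) else d for d in req.data]
    return code(*args)

# Example: sum(scale(2, [1, 2, 3])) expressed as nested <code, data> requests.
inner = Request(code=lambda k, xs: [k * x for x in xs], data=[2, [1, 2, 3]])
outer = Request(code=sum, data=[inner])
print(evaluate(outer))  # 12
```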
[0024] One or more aspects of the functional programming approach may be adapted to meet the requirements of a distributed computing environment (e.g. , a fog computing environment).
For example, a decision-making operation (e.g., at one or more network nodes or each network node) of the functional programming approach may be executed as follows. Upon receiving a <code, data> request (e.g., a request to execute <code> on <data>), an ICN network element may make one or more decisions. For example, do I have the capability to execute <code>? If yes - should I execute? If no - where should I forward? Do I have all the components of <code>? If no - who should I ask for them? Or should I simply forward somewhere else to execute? Do I have <data>? If not - where can I obtain it? With respect to the last example decision, a node may decide to forward the <code, data> request to a network node (e.g., a node that has <data>) and ask that network node to execute the computation request, for example.
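A minimal sketch of one possible ordering of these per-node decisions is shown below; the flags, the returned actions, and the decide function are assumptions made for illustration rather than behavior prescribed herein.

```python
# Illustrative only: one encoding of the per-node decision sequence for a <code, data> request.
from typing import Optional

def decide(can_execute: bool, willing: bool, missing_code: set,
           has_data: bool, data_holder: Optional[str]) -> str:
    """Return the action a node might take for a <code, data> request."""
    if not can_execute or not willing:
        return "forward the request to a capable/willing node"
    if missing_code:
        return f"request the missing components {sorted(missing_code)} or forward the request"
    if not has_data:
        if data_holder is not None:
            # e.g., ask the node that already holds <data> to execute the computation request
            return f"forward the request to {data_holder}, which holds <data>"
        return "obtain <data>, then execute locally"
    return "execute locally"

print(decide(True, True, set(), False, "node-B"))
# forward the request to node-B, which holds <data>
```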
[0025] As another example, a <code> may have one or more <code, data> components embedded in it. One or more of the <code, data> components (e.g. , each of the <code, data> components) may engage in a multi-level decision making process and may have their own
<code, data> components. Such layers may be deep, e.g., for a computationally interesting (and relatively complex) task. The amount of messaging, the processing involved in decision-making, and/or the delays may create challenges in handling network and/or computational tasks in an
Edge Computing environment.
[0026] It may not be assumed that the network may resolve the names <code> and <data>. For example, <data> may not exist (or the network may not be able to find it), <code> or the sub-components of <code> may not exist (or they may not be computed). The <code, data> objects may not be pushed into the network.
[0027] The storage of the result of <code, data> operation may become demanding. While in a "pure" functional programming approach, the string <code, data> itself may become the name, such simplification may not be practical. The name may grow to be large. A computer program (e.g., one provided as a binary executable) may be designed with a goal to make it difficult to discern what the program does. Data may be encrypted for the program (e.g. , for the program only) to understand. With the functional programming approach, details of the computation may be exposed to the network. Such exposure may be a security concern.
[0028] In an example alternative approach, an application (e.g., each application) may break up the programs it executes into components and/or provide the network with <program, data> components to execute as a piece of software. This alternative approach may encounter problems including, for example, finding a node capable of executing <program, data>, and/or composing and/or managing the result.
[0029] Systems and methods associated with edge computing are provided that utilize specific frame structure and/or basic operations. Requesting computation in an edge computing environment is described. Implementations may include providing an edge computing network with a set of processing atoms, defining a scope of an application configured to be executed on the edge computing network, distributing the set of processing atoms to one or more nodes of the edge computing network, and/or providing computation services from the one or more network nodes in response to a request received from a client subscribing to the scope of the application.
The frame structure used for communication in the edge computing network may include one or more of the following fields: a field indicating the type of the frame; or, a field containing information (e.g., number of fields, lengths of the fields, etc.) about the frame.
[0030] A client may subscribe to an application's scope (e.g., using implementation(s) described herein), which may encapsulate the context of the application. Upon subscribing to the scope, the client may publish into the scope. The network may be provided with a set of application specific processing atoms, which may be executable components that an application pushes into the network. The network may use information about these atoms to distribute them across the network to nodes that are capable of performing the processing. The client may request computation from the network nodes using the frame structure described herein.
[0031] A computing scheme may be disclosed herein. The computing scheme may involve atoms (e.g., which may be deployed by an application provider into the edge cloud) and scripts (e.g., which may be assembled by clients and may call atoms and/or other scripts). An edge cloud service may provide distributive computation of scripts on the compute nodes composing the system. Operation of the compute nodes composing the system may be defined to provide edge cloud service. Operation of the compute nodes composing the system may be defined to provide how atoms may be positioned in the network and how this position (e.g., as well as other operational information) may be propagated to other compute nodes. Operation of the compute nodes composing the system may be defined to provide how a compute node may process an incoming computation request from a client.
[0032] One or more components may be disclosed herein. The one or more components may be combined. The component(s) (e.g., combined components) may retain the flexibility of the ICN-based functional programming approach while addressing the issues described herein.
[0033] The notion of "scope" may be employed. A "scope" may provide a way for applications to encapsulate context. For illustration purposes, a scope may be viewed as a folder that may comprise named-objects and/or other sub-folders. Scopes may not be folders in the standard computer usage context. Context information, which may be captured using metadata, may be associated with scope(s) and/or object(s) (e.g., with multiple scopes). When a client subscribes to a scope, it may receive information about one or more (e.g., all) objects associated with the scope and the context (e.g., metadata) of the objects. The network may associate forwarding information with the scope and/or perform object management (e.g., deciding where to store the objects) on a certain basis (e.g., a per-scope basis). Tasks such as matching demand and availability, determining delivery paths, and/or the like, may be accomplished on a similar basis (e.g., the per-scope basis). The results may be associated with the whole scope (e.g., using scope spanning trees).
[0034] A processing "atom" may be employed. A processing "atom" may represent an executable component that an application may push into the network (e.g., through a suitable API). The atom may be provided within an encapsulation (e.g., as part of a virtual machine or container package) and/or published into a scope so that scope subscribers may become aware of the atom's availability. The atom may be provided with contextual information, which may allow the network to determine on which network node(s) it should be on-boarded (e.g., readied for execution), for example.
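For illustration, pushing an atom into a scope together with contextual information might look like the following sketch; the EdgeCloud class, its method names, and the metadata keys are assumptions, not an API defined herein.

```python
# Hypothetical sketch of publishing an atom into a scope and using its context for placement.
from typing import Dict

class EdgeCloud:
    def __init__(self) -> None:
        self.scopes: Dict[str, Dict[str, dict]] = {}

    def publish_atom(self, scope: str, atom_id: str, package: bytes, context: dict) -> None:
        """Store the atom package and its context so the network can decide placement."""
        self.scopes.setdefault(scope, {})[atom_id] = {"package": package, "context": context}

    def placement_candidates(self, scope: str, atom_id: str, nodes: list) -> list:
        """Pick nodes whose capabilities satisfy the atom's contextual requirements."""
        ctx = self.scopes[scope][atom_id]["context"]
        return [n for n in nodes if n["cpu"] >= ctx.get("min_cpu", 0)
                and ctx.get("runtime") in n["runtimes"]]

cloud = EdgeCloud()
cloud.publish_atom("myapp.example.com", "atoms/resize-image", b"<container image>",
                   {"runtime": "python", "min_cpu": 2})
nodes = [{"name": "edge-1", "cpu": 4, "runtimes": ["python"]},
         {"name": "edge-2", "cpu": 1, "runtimes": ["python"]}]
print(cloud.placement_candidates("myapp.example.com", "atoms/resize-image", nodes))
```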
[0035] Components provided herein may be used together to provide a simple approach to fog computing. Approaches described herein may provide flexibility comparable to that
resulting from a pure functional programming approach.
[0036] Methods for distributive computation controlled by a network node may be provided. Network condition information, computation loading, computational capabilities, and location information from other network nodes and/or location information of atoms may be received. Requests to execute a script from a client application may be received. Requests may include input data and/or an input data handle. Scripts may include code communicating with atoms and handling messages from atoms. Scripts may be parsed to determine which atoms are used/communicated with by the script. One or more network nodes may be selected to execute the script. Selection may be based on one or more of the location of used atoms, network condition information, computation load on each node, and location information of input data. A script may be executed locally (e.g., if a local node is selected to execute the script). This step may be preceded by a phase where the local node collects input data (e.g., if not present locally or in the input message) and/or some atoms (e.g., if those atoms are not already present locally). A script may be executed locally, including by remote invocation of atoms not present locally. An execution request may be forwarded towards one or more selected nodes. In an example, an application provider may publish 100 different atoms (e.g., microservices) in a network, inside a single scope identified using the application domain name (e.g., myapp.example.com). A client may then subscribe to the scope myapp.example.com and request computation of a script calling some of those atoms.
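A toy scoring rule illustrating how such a selection might weigh atom locations, per-node load, and the location of input data is sketched below; the weights and field names are assumptions for the example only, not a selection policy defined herein.

```python
# Illustrative only: score candidate nodes for executing a script.
from typing import Dict, Set

def select_node(needed_atoms: Set[str],
                atom_locations: Dict[str, Set[str]],   # atom -> nodes where it is preloaded
                node_load: Dict[str, float],           # node -> load in [0, 1]
                data_location: str) -> str:
    best_node, best_score = None, float("-inf")
    for node, load in node_load.items():
        local = sum(1 for a in needed_atoms if node in atom_locations.get(a, set()))
        score = (2.0 * local                                  # prefer nodes holding the atoms
                 - 3.0 * load                                 # avoid heavily loaded nodes
                 + (1.0 if node == data_location else 0.0))   # prefer co-location with input data
        if score > best_score:
            best_node, best_score = node, score
    return best_node

locations = {"atoms/ocr": {"edge-1"}, "atoms/translate": {"edge-1", "edge-2"}}
print(select_node({"atoms/ocr", "atoms/translate"}, locations,
                  {"edge-1": 0.9, "edge-2": 0.2}, data_location="edge-2"))
```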
[0037] Methods for policy-based distribution of atoms in an edge cloud may be provided. A new "subscribe for computation" message type (e.g., SUBCOMP, which holds the OID of the atom program that the sender holds) may be provided. The SUBCOMP may be sent and forwarded as a subscription. Recipients may use this message to update their internal representation of the location of atoms on remote compute nodes.
[0038] One or more of the following operations may be employed.
[0039] An application client may subscribe to the application's top-level scope. The scope name and/or any authorization parameters (e.g., those required for subscription) may be determined by the application (e.g., without provision by the network). The client may be provided with the scope content (e.g., after the client subscribes to the scope (or to another scope thereafter)), which may include one or more of the following: one or more sub-scopes of the scope, one or more named objects (e.g., data or code) published into the scope, and/or one or more atoms available as part of the scope. In some examples, what is provided may be the name of the scope/object/atom and their contextual information that is intended for the client (e.g., not including the contextual information intended only for the network).
[0040] The client may be updated with changes of the scope content (e.g., as part of scope subscription), for example in a timely manner (e.g., based on the specific design of the system). The client may be configured with different capabilities for the different objects. The client may subscribe to a sub-scope (e.g., as if subscribing to a scope). The client may subsequently cancel the subscription. The client may request a named object, which may trigger a one-time delivery, for example. The client may subscribe to a named object, which may trigger a delivery. The client may be updated when the named object changes. The client may subsequently cancel the subscription. The client may not request or subscribe to atoms, which may be configured to remain in the network. The client may be made aware of the names and/or contextual information (e.g., metadata) of the atoms.
[0041] Upon subscribing to a scope, the client may publish objects (e.g., sub-scopes and/or named objects) into the scope (e.g., if the client is authorized to do so). In some cases, the client may also publish atoms (e.g. , new atoms). The client may remove objects (e.g., subject to authorization) from the scope.
[0042] FIG. 1 illustrates an example frame structure. The frame structure may be used to communicate with the network and/or to handle computation. An example frame structure (e.g., FIG. 1) may include one or more of the following fields. An FTYPE field may represent the type of the frame. For example, FTYPE may be: PUB (which may indicate object publication by a client), REQ (which may indicate a request for an object by a client), SUB (which may indicate subscription by a client), CMP_REQ (which may indicate a computation request by a client), or CMP_SUB (which may indicate a computation subscription by a client). Other values for FTYPE are also possible. The frame structure may include an FI field, which may include information about the frame structure. For example, the FI field may indicate one or more of which fields are present in the frame. If a particular field may have a multiplicity of more than 1, the FI field may indicate how many are present in the frame. For variable field lengths, the FI field may indicate what the lengths are or where the field boundaries are. The frame structure may include an RID field, which may represent the name/ID given to an object resulting from the computation. An XID field may represent the name/ID of a code object to be executed. An OIDn field may represent the name/ID of the nth data object on which the code is to act. An R_CX field may represent the context information/metadata associated with the RID. An X_CX field may represent the context information/metadata associated with the XID. An On_CX field may represent the context information/metadata associated with the OIDn. An X_Pld field may represent the code payload (e.g., the actual code to be executed). An On_Pld field may represent the nth data payload (e.g., the actual data object). The frame structure may include a check string (not shown). The check string may be used to verify the integrity of the frame.
[0043] The frame described herein may be segmented for communication (e.g., when the frame is large). The XID and OID fields may have a NO_ID value. The RID field may comprise a "PUB" field (or bit). If SET, the PUB field may indicate that the result should be published into the network under the name RID. If not SET, the network may return the result to the client without storing it, for example. The XID, OID and/or RID fields may have a structure and may include scope information for where the objects may be found.
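A minimal sketch of a frame carrying the fields described above is shown below. The field names follow the description; the concrete representation (a Python dataclass with dictionary metadata) is an assumption for illustration and not a wire format defined herein.

```python
# Illustrative container for the frame fields described above (not an actual encoding).
from dataclasses import dataclass, field
from typing import List, Optional

NO_ID = "NO_ID"

@dataclass
class Frame:
    ftype: str                       # PUB, REQ, SUB, CMP_REQ, CMP_SUB, ...
    rid: str = NO_ID                 # name/ID of the result object
    publish_result: bool = False     # "PUB" bit: store the result under RID vs. return it
    xid: str = NO_ID                 # name/ID of the code object to execute
    oids: List[str] = field(default_factory=list)        # names/IDs of input data objects
    r_cx: dict = field(default_factory=dict)              # context/metadata for RID
    x_cx: dict = field(default_factory=dict)              # context/metadata for XID
    o_cx: List[dict] = field(default_factory=list)        # context/metadata for each OIDn
    x_pld: Optional[bytes] = None    # code payload
    o_plds: List[bytes] = field(default_factory=list)     # data payloads

    @property
    def fi(self) -> dict:
        """Frame information: which optional fields are present and their multiplicity."""
        return {"has_x_pld": self.x_pld is not None,
                "n_oids": len(self.oids), "n_o_plds": len(self.o_plds)}

req = Frame(ftype="CMP_REQ", rid="results/run-42", publish_result=True,
            xid="myapp.example.com/atoms/resize-image", oids=["photos/cat.jpg"])
print(req.fi)
```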
[0044] FIG. 2 shows an example frame structure that may be used for publication. An FTYPE field may indicate that the frame type is PUB. The frame may include the ID of the object being published, the object itself (in the payload), and/or metadata. The publication may be extended to publish multiple objects simultaneously.
[0045] FIG. 3 shows an example frame structure for subscription and/or requests for objects. The context field may not be needed. Multiple objects may be subscribed to/requested at the same time (not shown).
[0046] Computation may be illustrated herein. For example, computation may apply in the case of code acting on a single data object or no data objects, and the context fields may be ignored for simplicity purposes. The example may apply to both CMP_REQ and CMP_SUB operations. A difference between these operations may be that CMP_REQ may require a one-time computation, while CMP_SUB may require updates. CMP_SUB may require updates when an object component (e.g., any of the object components, for example, code or data) is updated.
[0047] FIG. 4 shows an example computation frame. In the example CMP frame, the RID and XID fields may be required. The OID field may not be required. For example, the client may request a computation without providing an input (e.g., data is "integrated" into the code itself). The X_Pld and O_Pld fields may not be required. Further, one or more of the following may be possible. If the XID field is not set to NO_ID and X_Pld is present, the Network may check for the existence of XID and may use X_Pld as a default option if no XID is found. If the XID field is not set to NO_ID and X_Pld is not present, the Network may check for the existence of XID. If XID cannot be found, the Network may return an error in response to the request. If the XID field is set to NO_ID and X_Pld is present, the Network may use X_Pld for computation. If the XID field is set to NO_ID and X_Pld is not present, the Network may declare a computation error. The same rules may apply to the OID and/or O_Pld fields.
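The XID/X_Pld resolution rules above may be encoded as in the following sketch (the same logic would apply to the OID/O_Pld fields); the lookup callable stands in for whatever name resolution the network performs and is an assumption of this example.

```python
# Illustrative encoding of the XID/X_Pld resolution rules described above.
from typing import Callable, Optional

NO_ID = "NO_ID"

def resolve_code(xid: str, x_pld: Optional[bytes],
                 lookup: Callable[[str], Optional[bytes]]) -> bytes:
    if xid != NO_ID:
        found = lookup(xid)
        if found is not None:
            return found                      # named code exists in the network
        if x_pld is not None:
            return x_pld                      # fall back to the supplied payload
        raise LookupError(f"XID {xid!r} not found and no X_Pld supplied")
    if x_pld is not None:
        return x_pld                          # anonymous code supplied inline
    raise ValueError("computation error: neither XID nor X_Pld provided")

store = {"myapp.example.com/atoms/echo": b"<code object>"}
print(resolve_code("myapp.example.com/atoms/echo", None, store.get))
print(resolve_code(NO_ID, b"<inline code>", store.get))
```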
[0048] In the above, one or more of the following may apply. The client may request a computation that uses existing code (e.g. , code that is in scope for the client to use) or provides code to compute. Computation may operate on pre-existing named data objects, or provide such
input within the frame. The location and forwarding frameworks associated with the system may be used to efficiently obtain one or more of these components (e.g. , by properly scoping the code and data objects).
[0049] The example CMP frame described herein may be used to implement other operations such as PUB, SUB, REQ, and/or the like. With respect to PUB, the X_Pld field may comprise a simple code to publish the object provided in O_Pld and the object name. The XID and OID fields may be left empty. The result (e.g., returned result) may be the result of the publication (e.g., success or error code) and the RID field may be left empty (e.g., the result may be returned instead of stored). With respect to SUB or REQ, CMP_SUB or CMP_REQ may be used respectively to fill in the required object name in OID (O_Pld may be left empty). For example, the X_Pld field may be a simple "do nothing" instruction and the XID may be empty. A special "do nothing" XID field may be reserved. The RID may be empty or set to be the same as OID. The resulting object may be returned by the network.
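For illustration, PUB- and REQ-like operations might be expressed as CMP frames along the lines of the mapping above; the dictionary layout and the reserved DO_NOTHING identifier are assumptions made for this sketch.

```python
# Illustrative only: building PUB- and REQ-style operations as CMP frames.
NO_ID = "NO_ID"
DO_NOTHING = "DO_NOTHING"   # hypothetical reserved "do nothing" XID

def cmp_publish(object_name: str, payload: bytes) -> dict:
    """PUB via CMP: X_Pld is a trivial 'publish O_Pld under object_name' program."""
    return {"ftype": "CMP_REQ", "rid": NO_ID, "xid": NO_ID,
            "x_pld": f"publish as {object_name}".encode(),
            "oid": NO_ID, "o_pld": payload}

def cmp_request(object_name: str) -> dict:
    """REQ via CMP: a reserved 'do nothing' code acting on the named object."""
    return {"ftype": "CMP_REQ", "rid": NO_ID, "xid": DO_NOTHING,
            "x_pld": None, "oid": object_name, "o_pld": None}

print(cmp_publish("photos/cat.jpg", b"...bytes...")["x_pld"])
print(cmp_request("photos/cat.jpg")["xid"])
```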
[0050] As noted herein, an application may use one or more APIs to provide the network with a set of application-specific processing atoms. The network may use information about these atoms, which may be provided by the APIs, to distribute them across the network, for example, to the nodes that are capable of doing the processing.
[0051] A network node (e.g., every network node) may be capable of one or more of the following operations. A known (e.g., by name) computation "program" (e.g., atom or otherwise) may be requested to process a data object (e.g. , passed within the request or named). Multiple data objects may be aggregated (e.g. , appending or pre-pending) together as needed. The operations may be executed in sequence and conditional execution and loops may be supported. With the example approach described herein, a network node (e.g., every network node) may be capable of executing scripts that may call other known execution objects, which may be scripts (e.g., scripts stored in the network by name) or atoms. The client may be constrained to provide such scripts as "code."
[0052] One or more of the following may be obtained using the example approach described herein. The application may constrain what is possible by providing a set of atoms, which may be implemented in efficient ways. The scripting approach may allow the clients flexibility to request arbitrary computational tasks while retaining efficiency with the requests. The client and/or the application itself may enrich (e.g., in a continual manner) what may be done by making scripts available for others to use (e.g., by publishing the scripts into the network).
Because scripts may be limited to referencing known objects (e.g., within scopes that are accessible to the scripts), finding and/or accessing these objects may be resolved by scoping, for example. In an example, a client may publish several scripts inside a scope clientapp.example.biz that make use of atoms inside scopes app1.example.com and app2.example.org. Another client may then request a computation for a script calling scripts of clientapp.example.biz as well as atoms of app3.example.com.
[0053] Potential system complexity may arise as a result of tight coupling between services, for example. Such complexity may be reduced by setting appropriate rules to simplify the operations. For example, rules may be set to reduce circular calls. As another example, a rigid hierarchy may be enforced between services. For instance, if A imports B, then B may not import A; if C also imports A, then B may not import C either.
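One possible (assumed) way to enforce such a hierarchy is to keep the service import graph acyclic and reject any import that would create a circular call, as sketched below.

```python
# Illustrative only: reject imports that would introduce circular calls between services.
from collections import defaultdict
from typing import Dict, Set

class ImportRegistry:
    def __init__(self) -> None:
        self.imports: Dict[str, Set[str]] = defaultdict(set)

    def _reachable(self, start: str, target: str) -> bool:
        stack, seen = [start], set()
        while stack:
            cur = stack.pop()
            if cur == target:
                return True
            if cur not in seen:
                seen.add(cur)
                stack.extend(self.imports[cur])
        return False

    def add_import(self, importer: str, imported: str) -> None:
        if self._reachable(imported, importer):
            raise ValueError(f"{importer} -> {imported} would create a circular call")
        self.imports[importer].add(imported)

reg = ImportRegistry()
reg.add_import("A", "B")
reg.add_import("C", "A")
try:
    reg.add_import("B", "C")          # B -> C -> A -> B would be circular
except ValueError as e:
    print(e)
```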
[0054] The example approach described herein may provide a framework for access control (e.g., how a network may verify that a client device has the necessary rights to access the resources such as atoms or named objects). For example, when a client subscribes to a scope, the client may go through an authorization process with the application that "owns" the scope (e.g., the application, not the network, may be configured to authorize the client to the network). As a result of authorization to a scope, the client may be provided with one or more of the following. The client may be provided with a "secret"/"authorization key" (AK), which may be used by the client to authorize access to objects within the scope (the network may be provided with the same "secret"/"authorization key" (AK)). The client may be provided with a way to derive authorization keys for sub-scopes (e.g., a scope authorization that implies authorization for one or more of the sub-scopes).
[0055] Once the client has a scope AK, the client may include the AK (or information obtained from the AK using a standard cryptographic authorization) in the context/metadata field associated with the object. The network may use the information provided by the AK to verify access rights to the object prior to allowing the operations. If the client has access rights to the objects (e.g., RID, XID, OID) on which it is requesting operations, authorization may be granted; or, the request may result in an authorization denied error.
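A minimal sketch of such an AK-based check, using an HMAC as one standard cryptographic authorization primitive, is shown below; the disclosure does not prescribe this exact mechanism, and the token derivation shown is an assumption.

```python
# Illustrative only: deriving and verifying an authorization token from a shared scope AK.
import hmac, hashlib

def auth_token(ak: bytes, rid: str, xid: str, oid: str) -> str:
    """Client side: derive a token over the object names from the scope AK."""
    msg = f"{rid}|{xid}|{oid}".encode()
    return hmac.new(ak, msg, hashlib.sha256).hexdigest()

def authorize(network_ak: bytes, rid: str, xid: str, oid: str, token: str) -> bool:
    """Network side: recompute the token with the shared AK and compare."""
    expected = auth_token(network_ak, rid, xid, oid)
    return hmac.compare_digest(expected, token)

ak = b"scope-authorization-key"
tok = auth_token(ak, "results/run-42", "atoms/resize-image", "photos/cat.jpg")
print(authorize(ak, "results/run-42", "atoms/resize-image", "photos/cat.jpg", tok))   # True
print(authorize(ak, "results/run-42", "atoms/resize-image", "photos/dog.jpg", tok))   # False
```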
[0056] Solution details for scripts and atoms may be provided. Compute nodes may implement a "compute node runtime." A compute node runtime may invoke atoms and scripts. A compute node runtime may include a Hardware Abstraction Layer (e.g., enabling the runtime to support heterogeneous compute nodes), virtualization support to run scripts and atoms in an isolated and resource-constrained environment, and virtual machines or interpreters (e.g., a V8 JavaScript VM or a Python VM).
[0057] To be interpreted, or to run over a VM (e.g., a Python or Java VM), atoms may need to be packaged (e.g., in binary or text format) for multiple platforms. Atom implementation may use the actor model (e.g., Akka or Orleans). Atoms may be implemented as individual actors (e.g., a unit of computation that may have a private state and/or process messages sequentially).
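For illustration, a minimal Python sketch of an atom written in an actor style follows. The class name, the thread-plus-queue mailbox, and the "increment" message are illustrative assumptions rather than a required implementation.

```python
import queue
import threading

class CounterAtom:
    """Toy atom: private state, messages processed one at a time."""
    def __init__(self):
        self._count = 0                        # private state of the actor
        self._mailbox = queue.Queue()          # sequential message processing
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self._mailbox.get()   # one message at a time
            if msg == "increment":
                self._count += 1
                reply.put(self._count)

    def send(self, msg):
        reply = queue.Queue(maxsize=1)
        self._mailbox.put((msg, reply))
        return reply                           # caller may wait (sync) or not (async)

atom = CounterAtom()
print(atom.send("increment").get())            # -> 1
```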
[0058] Scripts may (e.g., may always) be packaged as a portable executable (e.g., a Python or JavaScript script or Java bytecode) which may be run on any compute node. Each compute node may be running zero or more atoms (e.g., in individual threads), which may be made accessible from scripts through function calls. Examples of function calls may include "atom1.async.messageName(params)" (resp. "atom1.sync.messageName(params)"). Function calls may be translated by the runtime component into asynchronous (resp. synchronous) computing requests to local atoms. All of the atoms needed by a script may not be held on a single node. The compute node runtime may translate the function call into a message to a remote atom (and may wait for a reply if this is a synchronous call). Messages between scripts and remote atoms may be CMP_SUB/REQ messages. Messages between scripts and remote atoms may use the OID of the atom. A code payload of the message may be a call to a local atom.
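A non-normative Python sketch of how a compute node runtime might expose such function calls to a script is shown below. Because async is a reserved word in Python 3, the asynchronous proxy is named async_ here; the Runtime, AtomProxy, and dispatch names are assumptions, and the remote CMP_SUB/REQ path is stubbed as a print.

```python
class _Call:
    def __init__(self, runtime, atom_name, mode):
        self.runtime, self.atom_name, self.mode = runtime, atom_name, mode
    def __getattr__(self, message_name):
        def invoke(*params):
            return self.runtime.dispatch(self.atom_name, message_name, params,
                                         wait=(self.mode == "sync"))
        return invoke

class AtomProxy:
    """Exposes atom1.sync.messageName(...) / atom1.async_.messageName(...)."""
    def __init__(self, runtime, atom_name):
        self.sync = _Call(runtime, atom_name, "sync")
        self.async_ = _Call(runtime, atom_name, "async")

class Runtime:
    def __init__(self, local_atoms):
        self.local_atoms = local_atoms         # atom name -> callable

    def dispatch(self, atom_name, message_name, params, wait):
        if atom_name in self.local_atoms:      # local atom: direct call
            result = self.local_atoms[atom_name](message_name, *params)
            return result if wait else None
        # Remote atom: would be carried in a CMP_SUB/REQ message addressed by
        # the atom's OID; stubbed here for illustration.
        print(f"CMP_SUB/REQ -> {atom_name}.{message_name}{params} (wait={wait})")

rt = Runtime({"atom1": lambda m, *p: f"{m}{p} handled locally"})
atom1, atom2 = AtomProxy(rt, "atom1"), AtomProxy(rt, "atom2")
print(atom1.sync.messageName("x"))   # local, synchronous
atom2.async_.messageName("y")        # remote, asynchronous (fire-and-forget)
```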
[0059] FIG. 5 illustrates a system model and shows an exemplary structure for scripts and atoms inside a compute node. Arrows and associated text describe typical interface functions between components in pseudo-code using Python syntax. A compute node runtime may have both atoms loaded in memory (or available for dynamic loading). A compute node runtime may dynamically load the script and use the script as part of a framework that creates a Script instance and operates its message I/O until the script completes (e.g., until self.result is set in the constructor or a message handler). Creating an instance of the script may call its init method. The script may stay in memory until its "result" attribute is set. If a "result" attribute is set, the compute node runtime may publish the "result" object back to the client and may destroy or cache the script class instance. Atoms may be called synchronously or asynchronously. Atoms may send asynchronous messages back to the script.
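The lifecycle just described may be sketched as follows. The Script and ComputeNodeRuntime class names, the on_message handler, and the dispatch helper are hypothetical stand-ins for the framework shown in FIG. 5, assumed here for illustration only.

```python
class Script:
    """Illustrative script skeleton following the lifecycle described above."""
    def __init__(self, runtime, request_input):
        self.result = None                 # the runtime watches this attribute
        # A synchronous atom call made during construction may already set the result.
        self.result = runtime.dispatch("atom1", "process", (request_input,), wait=True)

    def on_message(self, sender_oid, payload):
        # Asynchronous replies from atoms would land here; setting self.result
        # ends the script's life in the runtime.
        self.result = payload

class ComputeNodeRuntime:
    """Minimal stand-in for the framework of FIG. 5."""
    def __init__(self, local_atoms):
        self.local_atoms = local_atoms     # atom name -> callable

    def dispatch(self, atom_name, message_name, params, wait):
        result = self.local_atoms[atom_name](message_name, *params)
        return result if wait else None

    def run_script(self, script_cls, request_input):
        script = script_cls(self, request_input)   # creating the instance calls __init__
        # ... operate the script's message I/O here until script.result is set ...
        return script.result                       # published back to the client

rt = ComputeNodeRuntime({"atom1": lambda m, x: f"{m}({x}) done"})
print(rt.run_script(Script, "input-object"))       # -> "process(input-object) done"
```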
[0060] A compute node may decide to run a script. A compute node may decide to run a script if the compute node holds most of the atoms and if the remaining atoms are close by. A compute node may decide to run a script based on an estimation of the throughput to/from those atoms (e.g., based on previous measurements), to estimate the network load of running the script locally. The decision to run a script on a node may take into account the position of all required atoms, as well as the estimated load on the network for communicating with remote atoms. A compute node may download an atom locally to further minimize a load. A compute node may decide to forward the request even if it has all atoms loaded locally, e.g., if the compute node is currently under heavy load. A compute node may decide to forward the request even if it has all atoms loaded locally, e.g., because the location of input data may also be a factor in taking the decision whether to execute locally.
[0061] Compute node runtime may implement a fog cloud protocol. Compute node runtime may record and forward/coalesce subscription messages. Compute node runtime may (e.g., may also) forward publication messages towards subscribers. For simplicity of illustration, a small fog cloud network is described where a simple forwarding scheme may be sufficient.
Techniques described herein may be used to scale a system up.
[0062] FIG. 6 describes an example of a distributive computing procedure including atoms publication and a script computation request. A client device may not be part of the distributed computing system. Examples of client devices include wireless transmit/receive units (WTRUs), including, for example, a smartphone, or an IoT device such as a sensor or actuator. A first compute node and a second compute node are provided. The first compute node and the second compute node may be collocated with switches or routers in a networked system. A source node for atoms may be provided. The source node may be a repository where atoms published by application providers are stored. The source node may be distant (e.g., one or more hops away from the first and second compute nodes). If the source node is distant, other intermediate compute nodes may be involved in the distribution of atoms. FIG. 6 depicts the shortest path from the first compute node to the source node as passing through the second compute node. A controller function is provided. The controller function distributes application policy information to the first compute node and the second compute node.
[0063] Initially (not depicted), an application may be provisioned in the edge cloud. Atoms are made available on one or more source nodes, and policy associated with this application is provisioned on a controller. The policy may be described using a declarative language such as JSON or XML. The policy may include information such as a descriptor of the atoms (e.g., identity, resource requirements, preferred deployment density (e.g., not less than 2 hops away, not more than 5 hops away from other atoms of the same type), global minimum and maximum number of atoms of this type in the system, affinity with other atoms (e.g., collocated with atom A, within X hops of atom B, etc.), a flag for synchronous and/or asynchronous invocation support, a list of methods (e.g., supported message handlers) implemented by this atom, etc.).
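An illustrative policy document of this kind might look as follows, shown here as a Python dictionary serialized to JSON. All field names and values are assumptions chosen for the example and are not defined by this disclosure.

```python
import json

policy = {
    "application_id": "app1.example.com",
    "version": 3,
    "provider_public_key": "base64:MFkw...",        # verifies atom program origin
    "atoms": [
        {
            "oid": "sha256:9f2c...",                 # hash of the atom program object
            "size_bytes": 120_000,
            "package_format": "python-wheel",
            "supports": ["sync", "async"],           # invocation support flag
            "methods": ["messageName", "reset"],     # supported message handlers
            "deployment": {"min": 1, "max": 10, "initial": 2, "preferred": 4,
                           "density": {"min_hops": 2, "max_hops": 5}},
            "affinity": [{"atom": "atomB", "within_hops": 3}],
        }
    ],
    "inter_atom_affinity": [["atom1", "atom2"]],     # commonly used together in scripts
}
print(json.dumps(policy, indent=2))
```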
[0064] Compute nodes composing the system (e.g., compute nodes connected in a partial mesh fashion) may discover their neighbors and may need to establish a persistent connection with them, depending on the underlying transport protocol used for messages. At discovery time, compute nodes subscribe (possibly implicitly) to core topics, e.g., including "local information" and "application policy".
[0065] The controller function may publish a new application policy in the system, using a "Publish" message type. The "Publish" message type may include the OID of an application policy object (e.g., a hash value of the file), the policy file as object payload (O_Pld), and context/metadata information (e.g., including the application ID, version, etc.). The application policy may include one or more of: a list of atom OIDs and related information such as size, code package format, signature, and the minimum, maximum, initial, and preferred number of atoms that should be deployed in the network, etc.; inter-atom affinity, e.g., a list of atoms commonly used together in a script; and a public key from the application provider, enabling verification of the atoms' program objects origin.
[0066] All compute nodes may be subscribers of application policies. Publication may be flooded in the network and each compute node may process it. Processing may be preceded with a random delay, so that the decisions of neighboring nodes (e.g., that processed the application policy earlier) may be factored in by compute nodes that process the application policy (e.g., relatively later). Policies may be reevaluated periodically, ensuring that the atoms' repartition in the network evolves to match usage over time.
[0067] As depicted in FIG. 6, the second compute node decides to host atoms #1 and #2 (e.g., because it is the first to process the application policy and sees that no neighbors are hosting those atoms). The second compute node may subscribe for those atoms, and may receive them from the source (as depicted in this example). The second compute node may receive the atoms from any other node holding them. A response contains the atom program and metadata (e.g., including a signed hash of the atom program that may enable verifying that it originates from the application provider). The second node may load the atoms in memory. The second node may "subscribe for computation" for those atoms. A hop count in the message (e.g., as a subfield of FI) may be used to inform the recipients of the distance of a particular atom, which may be factored into the decision to run the script locally.
[0068] The "subscribe for computation" message may be a new message type SUBCOMP (otherwise similar to the existing subscribe message), using the OID of the atom. The "subscribe for computation" message may be a regular subscribe message (also using the OID of the atom), with a new "computation" flag set. By sending the "subscribe for computation" message, a node indicates that it holds the atom whose OID is given in the message, and may be willing to accept messages for this atom.
[0069] The first compute node may apply a delay before processing the application policy.
The length of this delay may be random. The length of this delay may depend on a first analysis of the application policy and of context information (such as, for example, the currently available
local computing resources). For example, the first compute node may already use most of its local resources to host atoms, and therefore may decide to wait for less loaded neighbors to take their decisions. The first compute node may receive "subscribe for computation" messages from the second node, and may update its local record of subscriptions. Since the atoms are available nearby, the first compute node may decide not to locally host atoms for this application.
[0070] The first and second compute nodes may publish context information including network and computing resources load (e.g., on a periodic basis). The context information may be encoded in an XML or JSON document in the O_Pld field of the publish message. Compute nodes may collect this information and maintain a live local representation of computing and network load throughout the system. Scalability considerations may be applied. Load information may be aggregated (e.g., to limit the impact of signaling on the system).
[0071] A client device (e.g., an application client running on the client device) may assemble a script (such as, for example, a JavaScript or Python script making use of atoms described herein). The client device may send a computation request. The computation request may specify the returned object's OID. The computation request may place the script code into the X_Pld field. Input data may be present. The computation request may place input data in an O_Pld field. In some examples, scripts and/or input data objects may be published separately, and may be included by OID in the computation request XID (resp. OID) field.
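A hedged sketch of how a client might assemble such a computation request follows. The message type string, the field spellings, and the helper calls are assumptions made for illustration only.

```python
import hashlib
import json
import uuid

script_code = "self.result = atom1.sync.messageName(input_data)"
input_data = b"sensor-readings"

request = {
    "type": "CMP_REQ",                               # message type name is an assumption
    "RID": uuid.uuid4().hex,                         # OID under which the result will be published
    "X_Pld": script_code,                            # script code carried in the request
    "O_Pld": input_data.hex(),                       # inline input data (could instead be an OID)
    "X_CX": {"application_id": "app1.example.com"},  # context/metadata
}

# Alternatively, the script may be published separately and referenced by OID:
request_by_reference = dict(
    request,
    X_Pld=None,
    XID="sha256:" + hashlib.sha256(script_code.encode()).hexdigest(),
)
print(json.dumps(request, indent=2))
```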
[0072] Decision process execution may occur on the first compute node. The first compute node may receive the message. The first compute node may parse the script and collect the list of atoms that the script is using. To avoid re-parsing the message on other nodes, the result of the parsing operation (e.g., the list of used atoms) may be placed in the computation request metadata. The computation request metadata may be part of the computation request forwarded to other nodes. FIG. 6 depicts the list of atoms used as (atom #1, atom #2).
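For example, collecting the list of used atoms could be as simple as scanning the script for atom call patterns, as in the following sketch; the regular expression and function name are illustrative assumptions.

```python
import re

ATOM_CALL = re.compile(r"\b(atom\w+)\.(?:sync|async)\.\w+\s*\(")

def atoms_used(script_code):
    """Collect the distinct atom names referenced through sync/async calls."""
    return sorted(set(ATOM_CALL.findall(script_code)))

script = """
x = atom1.sync.messageName(data)
atom2.async.log(x)
"""
print(atoms_used(script))   # -> ['atom1', 'atom2']
# The list could then be placed in the request metadata (e.g., X_CX) so that
# downstream compute nodes need not re-parse the script.
```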
[0073] The first compute node may decide to process locally. The first compute node may decide to forward the request. As depicted, a subscription for computation for both atoms #1 and #2 is found on the same interface. The first compute node may determine to forward the message over this interface.
[0074] Decision process execution may occur on the second compute node. Upon reception, the second compute node may get the parsing result from metadata (or may parse the script, e.g., if this information was not present in metadata). The second compute node may determine that both needed atoms are present locally. The second compute node may determine that the local processing load is low enough that the second compute node decides to process locally. The second compute node could take the decision to forward the request (such as, for example, if the processing load was high locally and/or if the needed atoms were also available further in the network).
[0075] The second compute node may perform the requested computation (e.g., by interpreting the script locally over a Python or JavaScript VM). Invocations of methods over atoms may be translated into synchronous or asynchronous calls to local atoms. Local atoms may (e.g., in response) send messages to other atoms or send messages back to the calling script (which may implement message handlers for this purpose). A script may produce a computation response, which the second compute node may publish in a response message. The computation response may be present in the O_Pld section and may be associated with an OID equal to the RID provided in the computation request.
[0076] Although not depicted in FIG. 6, a third atom #3 may be located on a third compute node. The atom #3 may not be available on the second compute node.
[0077] The second compute node may decide to compute locally. For example, when the script invokes atom #3, the runtime component on the second compute node may send a CMP_SUB/REQ message to the third compute node, including a short script which invokes the required action on atom #3. The runtime component on the second compute node may wait for a response from the third compute node (e.g., if the call to the atom #3 method was synchronous) and use the returned object as output of the function call in the script. The runtime component on the second compute node may proceed immediately with interpreting the rest of the script (e.g., if the call to the atom #3 method was asynchronous).
[0078] The second compute node may decide to get atom #3 and compute locally. The second compute node may subscribe to atom #3, wait for it to be received and installed locally, and then proceed with executing the script.
[0079] The second compute node may decide to forward the computation request towards a more appropriate target (e.g., towards the third compute node).
[0080] FIG. 6 depicts a small network, such as, for example, a network where subscriptions for policies and computations and publication of load information may reach all nodes.
Scalability to a larger network may be achieved using one or more of the following techniques.
For example, computing nodes may be grouped together in domains. Aggregated information may be sent between domains (e.g., subscriptions for computation for any atom of application A). In instances where it makes sense to have a large domain, using a Time-to-Live may avoid flooding the network. A Time-to-Live may create a "horizon" (e.g., a maximum number of hops) beyond which it is impossible for a compute node to have information about atoms. In a large domain, subscription information may be disseminated in an aggregated, rate-limited form. An aggregated, rate-limited form may ensure that all compute nodes have a view of the whole network which is recent for close by nodes and older for more distant nodes. In a large domain using a hierarchical topology, subscriptions for computation and other information may be sent up (e.g., from leaves to core) by default. When a compute node does not know how to handle a request, it may then forward it up until it reaches a compute node aware of the location of the required atoms.
[0081] FIG. 7 illustrates a decision process by a compute node upon reception of a processing request. Upon reception of the request, the compute node may check if a request's metadata section (e.g., X_CX) includes parsing results placed there by another compute node. If found, the list of called atoms is obtained from this metadata. If parsing results are not found, the compute node may obtain the script from the X_Pld section of the request. If parsing results are not found, the compute node may fetch the script from the network, using the XID from the request. The compute node may parse the script to obtain a list of atoms called by the script. The compute node may place the result in a metadata section of the request (e.g., in case the request may be forwarded to other compute nodes).
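The decision flow of FIG. 7 may be sketched in Python as follows. The Node stub, its helper methods, and the metadata keys are hypothetical and only illustrate where the parse-once, forward-with-metadata behavior would sit.

```python
import re

class Node:
    """Stub compute node; helper behavior is assumed for illustration."""
    def __init__(self, local_atoms, stored_objects):
        self.local_atoms = set(local_atoms)
        self.stored_objects = stored_objects     # would otherwise be fetched from the network

    def fetch_object(self, xid):
        return self.stored_objects[xid]

    def parse_atoms(self, script):
        return sorted(set(re.findall(r"\b(atom\w+)\.", script)))

def handle_processing_request(node, request):
    # Reuse parsing results left in the metadata (X_CX) by an upstream node, if any.
    meta = request.setdefault("X_CX", {})
    atoms = meta.get("atoms_used")
    if atoms is None:
        script = request.get("X_Pld") or node.fetch_object(request["XID"])
        atoms = node.parse_atoms(script)
        meta["atoms_used"] = atoms               # avoids re-parsing if the request is forwarded
    missing = [a for a in atoms if a not in node.local_atoms]
    # The strategies (compute locally, gather missing atoms, forward, drop)
    # would be enumerated and costed here.
    return {"atoms": atoms, "missing": missing}

node = Node(local_atoms=["atom1"], stored_objects={})
request = {"X_Pld": "r = atom1.sync.f(x); atom2.async.g(r)"}
print(handle_processing_request(node, request))  # atoms ['atom1', 'atom2'], missing ['atom2']
```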
[0082] The compute node may enumerate and evaluate (e.g., calculate a cost for) available strategies. When determining a strategy, the compute node may use as input one or more of: the list of called atoms, request metadata, and context information.
[0083] Request metadata may include "Affinity" information of the script with certain node IDs, certain local atoms, etc. "Affinity" may be associated with a score which may indicate that a particular setup (e.g., "running on node with given nodeId") is mandatory or more or less highly preferred. "Affinity" information may be set by the originating client. "Affinity" information may be set by a compute node (e.g., a compute node's decision process algorithm may decide a strategy is preferred, and set it in the affinity section in order to reduce the processing power required to enumerate and evaluate strategies in other compute nodes).
"Affinity" information may include one or more of: a preferred node ID (e.g., the INCENTIVE term for this node ID would be increased with a weight which may be provided by the client in metadata); a required node ID or set of node IDs (e.g., cost may be infinite for all other nodes); a set of atoms which may be used locally (e.g., others may be remote); and a constraint on the compute node that may run the script (e.g., manufacturer, type, location, latency to end user, etc.). A set of atoms which may be used locally may limit the available strategies, since all required atoms may need to be gathered if not already present. A constraint on the compute node that may run the script may limit the set of eligible compute nodes considered when enumerating the strategies.
[0084] Context information may include one or more of: a list of subscriptions for computation, a list of processing load information, a list of network load information, and topological information.
[0085] A list of subscriptions for computation from local and remote compute nodes may indicate where atoms are located. Elements of this list may have fields (such as, for example, nodeId, atomIds, interface, cost). The interface may be the egress interface that may be used to reach the node identified by nodeId (e.g., the interface the related subscriptions were received from). atomIds may be a list of atoms that the node has subscribed for computation for. Cost may be a number of hops to reach the node from this interface.
[0086] A list of processing load information may include nodeId and/or processingLoad. A list of network load information may include linkId and/or linkLoad. The list of processing load information and the list of network load information may be built (e.g., and maintained) by the compute node. The list of processing load information and the list of network load information may be updated by the compute node (e.g., continuously). The compute node may receive and send "Publish compute and network load information" messages from/to remote and local compute nodes.
[0087] Topological information may identify the links (identified by linkIds) and/or the number of hops to reach remote nodes (identified by nodeIds).
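The context information lists of paragraphs [0085]-[0087] might be represented, purely for illustration, with simple data classes such as the following; the field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ComputationSubscription:
    node_id: str
    atom_ids: List[str]     # atoms the node has subscribed for computation for
    interface: str          # egress interface used to reach node_id
    cost: int               # hops to reach the node via this interface

@dataclass
class ContextInformation:
    subscriptions: List[ComputationSubscription] = field(default_factory=list)
    processing_load: Dict[str, float] = field(default_factory=dict)   # node_id -> load
    link_load: Dict[str, float] = field(default_factory=dict)         # link_id -> load
    hops_to: Dict[str, int] = field(default_factory=dict)             # node_id -> hop count

ctx = ContextInformation(
    subscriptions=[ComputationSubscription("node2", ["atom1", "atom2"], "if0", 1)],
    processing_load={"node2": 0.3},
)
print(ctx.subscriptions[0].atom_ids)   # -> ['atom1', 'atom2']
```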
[0088] The compute node may enumerate possible strategies, such as, for example, including: forward over interface X, forward over multiple interfaces X-Y-..., compute locally, gather atom(s) A-B-... and compute locally, drop, etc. "Affinity" metadata may influence this enumeration by ruling out possible strategies.
[0089] The compute node may calculate a cost for a strategy. "Affinity" metadata may influence the cost calculation of a strategy, e.g., adding a positive cost (e.g., decreasing the odds of choosing this strategy) to "compute locally" if preferred atoms are not present locally.
[0090] For a "compute locally" strategy: COST = COST_local_computation + COST_remote_computations - INCENTIVE. COST_local_computation may be 0, for example if the local processing load is less than a threshold. COST_local_computation may include a cost proportional to (local processing load - threshold). COST_remote_computations may be, for example, a number of hops to the atom + the cost of computation on the remote node, for all atoms not locally present (the cost for locally present atoms may be 0). INCENTIVE may be a negative cost applied to increase the chance to compute locally each time the request is forwarded (e.g., in order to avoid loops and reduce end-to-end latency). INCENTIVE may increase each time the request is forwarded while in the fog cloud network.
[0091] For a "gather atom(s) A-B-... and compute locally" strategy: COST = COST_gather + COST_hold + COST_local_computation + COST_remote_computations - INCENTIVE. COST_gather may be the sum of the costs of transmitting over each link from the atom source node(s). COST_hold may be the cost of holding the atom locally, which may be 0 if there is available caching space, and higher if older unused atoms should be evicted to make room. COST_local_computation may include a cost proportional to (local processing load - threshold). COST_remote_computations may be, for example, a number of hops to the atom + the cost of computation on the remote node, for all atoms not locally present (the cost for locally present atoms may be 0). INCENTIVE may be a negative cost applied to increase the chance to compute locally each time the request is forwarded (e.g., in order to avoid loops and reduce end-to-end latency). INCENTIVE may increase each time the request is forwarded while in the fog cloud network.
[0092] For forwarding strategies, the compute node may determine which remote compute node is most likely to perform the request processing. The cost of forwarding may be COST = COST_forwarding + COST_computation. COST_forwarding may be the network cost of forwarding the request and response between the present compute node and the target compute node. COST_computation may be the cost of the best local computation strategy on that node. If forwarding to more than one interface, an additional term may be added to take into account the cost of duplicating computation. Forwarding over multiple interfaces may not be efficient. Forwarding over multiple interfaces may be used when increased reliability is critical. Forwarding over multiple interfaces may be used when multiple concurrent computations are the desired behavior. Enumerated strategies may include forwarding over a single interface.
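The cost expressions above may be combined, for illustration, into a simple strategy comparison. The numeric inputs and the per-atom remote computation cost constant below are arbitrary assumptions, not values from the disclosure.

```python
REMOTE_COMPUTE_COST = 1.0   # assumed per-atom cost of computing on a remote node

def cost_compute_locally(local_load, threshold, remote_atom_hops, incentive):
    local = max(0.0, local_load - threshold)            # 0 when under the load threshold
    remote = sum(hops + REMOTE_COMPUTE_COST for hops in remote_atom_hops)
    return local + remote - incentive

def cost_gather_and_compute(gather_link_costs, hold_cost, local_load, threshold,
                            remote_atom_hops, incentive):
    return (sum(gather_link_costs) + hold_cost
            + cost_compute_locally(local_load, threshold, remote_atom_hops, incentive))

def cost_forward(forwarding_cost, best_remote_computation_cost):
    return forwarding_cost + best_remote_computation_cost

strategies = {
    "compute locally": cost_compute_locally(0.8, 0.5, remote_atom_hops=[2], incentive=0.2),
    "gather and compute locally": cost_gather_and_compute([1, 1], 0.1, 0.8, 0.5, [], 0.2),
    "forward": cost_forward(2.0, 0.4),
}
print(min(strategies, key=strategies.get))              # the cheapest strategy is selected
```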
[0093] In a larger network, available information may be incomplete or outdated, especially for distant nodes. For example, an atom may be known to be reachable over an interface from a distant node, but no topological information may be available to estimate the cost of reaching this node. Costs may be estimated based on local or historical values. As this process may be repeated at each hop, information may become more accurate as the request travels through the network.
[0094] Although in the below examples wireless network details are provided, the embodiments described herein may be implemented and/or used, for example, in a wired and/or wireless network (e.g., in device(s) of such networks, such as network devices, end user devices, etc.). These devices may include a WTRU that works wirelessly or a WTRU used in a wired network (e.g., a computer or other end user device with a wired connection to the network). For illustration purposes, the disclosure below describes an example wireless communication network 800 (e.g. , as shown in FIG. 8A) in which one or more disclosed embodiments may be
implemented or used.
[0095] The communications system 800 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 800 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 800 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like.
[0096] As shown in FIG. 8A, the communications system 800 may include wireless transmit/receive units (WTRUs) 802a, 802b, 802c, and/or 802d (which generally or collectively may be referred to as WTRU 802), a radio access network (RAN) 803/804/805, a core network 806/807/809, a public switched telephone network (PSTN) 808, the Internet 810, and other networks 812, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 802a, 802b, 802c, and/or 802d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 802a, 802b, 802c, and/or 802d may be configured to transmit and/or receive wireless signals and may include user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, consumer electronics, and the like.
[0097] The communications systems 800 may also include a base station 814a and a base station 814b. Each of the base stations 814a, 814b may be any type of device configured to wirelessly interface with at least one of the WTRUs 802a, 802b, 802c, and/or 802d to facilitate access to one or more communication networks, such as the core network 806/807/809, the Internet 810, and/or the networks 812. By way of example, the base stations 814a and/or 814b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode
B, a site controller, an access point (AP), a wireless router, and the like. While the base stations
814a, 814b are each depicted as a single element, it will be appreciated that the base stations
814a, 814b may include any number of interconnected base stations and/or network elements.
[0098] The base station 814a may be part of the RAN 803/804/805, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 814a and/or the base station
814b may be configured to transmit and/or receive wireless signals within a particular
geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with the base station 814a may be divided into three sectors. Thus, in one embodiment, the base station 814a may include three transceivers, i.e., one for each sector of the cell. In another embodiment, the base station 814a may employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell.
[0099] The base stations 814a and/or 814b may communicate with one or more of the WTRUs 802a, 802b, 802c, and/or 802d over an air interface 815/816/817, which may be any suitable wireless communication link (e.g. , radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 815/816/817 may be established using any suitable radio access technology (RAT).
[00100] More specifically, as noted above, the communications system 800 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 814a in the RAN 803/804/805 and the WTRUs 802a, 802b, and/or 802c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 815/816/817 using wideband CDMA (WCDMA).
WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).
[00101] In another embodiment, the base station 814a and the WTRUs 802a, 802b, and/or 802c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 815/816/817 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A).
[00102] In other embodiments, the base station 814a and the WTRUs 802a, 802b, and/or 802c may implement radio technologies such as IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[00103] The base station 814b in FIG. 8A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like. In one embodiment, the base station 814b and the WTRUs 802c, 802d may implement a
radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In another embodiment, the base station 814b and the WTRUs 802c, 802d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 814b and the WTRUs 802c, 802d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell. As shown in FIG. 8A, the base station 814b may have a direct connection to the Internet 810. Thus, the base station 814b may not be required to access the Internet 810 via the core network 806/807/809.
[00104] The RAN 803/804/805 may be in communication with the core network 806/807/809, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 802a, 802b, 802c, and/or 802d. For example, the core network 806/807/809 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 8A, it will be appreciated that the RAN 803/804/805 and/or the core network 806/807/809 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 803/804/805 or a different RAT. For example, in addition to being connected to the RAN 803/804/805, which may be utilizing an E-UTRA radio technology, the core network
806/807/809 may also be in communication with another RAN (not shown) employing a GSM radio technology.
[00105] The core network 806/807/809 may also serve as a gateway for the WTRUs 802a, 802b, 802c, and/or 802d to access the PSTN 808, the Internet 810, and/or other networks 812. The PSTN 808 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 810 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 812 may include wired or wireless
communications networks owned and/or operated by other service providers. For example, the networks 812 may include another core network connected to one or more RANs, which may employ the same RAT as the RAN 803/804/805 or a different RAT.
[00106] Some or all of the WTRUs 802a, 802b, 802c, and/or 802d in the communications system 800 may include multi-mode capabilities, i.e., the WTRUs 802a, 802b, 802c, and/or 802d may include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU 802c shown in FIG. 8A may be configured to
communicate with the base station 814a, which may employ a cellular-based radio technology, and with the base station 814b, which may employ an IEEE 802 radio technology.
[00107] FIG. 8B depicts a system diagram of an example WTRU 802. As shown in FIG. 8B, the WTRU 802 may include a processor 818, a transceiver 820, a transmit/receive element 822, a speaker/microphone 824, a keypad 826, a display/touchpad 828, non-removable memory 830, removable memory 832, a power source 834, a global positioning system (GPS) chipset 836, and other peripherals 838. It will be appreciated that the WTRU 802 may include any subcombination of the foregoing elements while remaining consistent with an embodiment. Also, embodiments contemplate that the base stations 814a and 814b, and/or the nodes that base stations 814a and 814b may represent, such as but not limited to a base transceiver station (BTS), a Node-B, a site controller, an access point (AP), a home node-B, an evolved home node-B (eNodeB), a home evolved node-B (HeNB), a home evolved node-B gateway, and proxy nodes, among others, may include some or all of the elements depicted in FIG. 8B and described herein.
[00108] The processor 818 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 818 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 802 to operate in a wireless environment. The processor 818 may be coupled to the transceiver 820, which may be coupled to the transmit/receive element 822. While FIG. 8B depicts the processor 818 and the transceiver 820 as separate components, it may be appreciated that the processor 818 and the transceiver 820 may be integrated together in an electronic package or chip.
[00109] The transmit/receive element 822 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 814a) over the air interface 815/816/817. For example, in one embodiment, the transmit/receive element 822 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 822 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 822 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 822 may be configured to transmit and/or receive any combination of wireless signals.
[00110] In addition, although the transmit/receive element 822 is depicted in FIG. 8B as a single element, the WTRU 802 may include any number of transmit/receive elements 822. More
specifically, the WTRU 802 may employ MIMO technology. Thus, in one embodiment, the WTRU 802 may include two or more transmit/receive elements 822 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 815/816/817.
[00111] The transceiver 820 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 822 and to demodulate the signals that are received by the transmit/receive element 822. As noted above, the WTRU 802 may have multi-mode capabilities. Thus, the transceiver 820 may include multiple transceivers for enabling the WTRU 802 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.
[00112] The processor 818 of the WTRU 802 may be coupled to, and may receive user input data from, the speaker/microphone 824, the keypad 826, and/or the display/touchpad 828 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 818 may also output user data to the speaker/microphone 824, the keypad 826, and/or the display/touchpad 828. In addition, the processor 818 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 830 and/or the removable memory 832. The non-removable memory 830 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 832 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 818 may access information from, and store data in, memory that is not physically located on the WTRU 802, such as on a server or a home computer (not shown).
[00113] The processor 818 may receive power from the power source 834, and may be configured to distribute and/or control the power to the other components in the WTRU 802. The power source 834 may be any suitable device for powering the WTRU 802. For example, the power source 834 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
[00114] The processor 818 may also be coupled to the GPS chipset 836, which may be configured to provide location information (e.g. , longitude and latitude) regarding the current location of the WTRU 802. In addition to, or in lieu of, the information from the GPS chipset
836, the WTRU 802 may receive location information over the air interface 815/816/817 from a base station (e.g. , base stations 814a, 814b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the
WTRU 802 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[00115] The processor 818 may further be coupled to other peripherals 838, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 838 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
[00116] FIG. 8C depicts a system diagram of the RAN 803 and the core network 806 according to an embodiment. As noted above, the RAN 803 may employ a UTRA radio technology to communicate with the WTRUs 802a, 802b, and/or 802c over the air interface 815. The RAN 803 may also be in communication with the core network 806. As shown in FIG. 8C, the RAN 803 may include Node-Bs 840a, 840b, and/or 840c, which may each include one or more transceivers for communicating with the WTRUs 802a, 802b, and/or 802c over the air interface 815. The Node-Bs 840a, 840b, and/or 840c may each be associated with a particular cell (not shown) within the RAN 803. The RAN 803 may also include RNCs 842a and/or 842b. It will be appreciated that the RAN 803 may include any number of Node-Bs and RNCs while remaining consistent with an embodiment.
[00117] As shown in FIG. 8C, the Node-Bs 840a and/or 840b may be in communication with the RNC 842a. Additionally, the Node-B 840c may be in communication with the RNC 842b. The Node-Bs 840a, 840b, and/or 840c may communicate with the respective RNCs 842a, 842b via an Iub interface. The RNCs 842a, 842b may be in communication with one another via an Iur interface. Each of the RNCs 842a, 842b may be configured to control the respective Node-Bs 840a, 840b, and/or 840c to which it is connected. In addition, each of the RNCs 842a, 842b may be configured to carry out or support other functionality, such as outer loop power control, load control, admission control, packet scheduling, handover control, macrodiversity, security functions, data encryption, and the like.
[00118] The core network 806 shown in FIG. 8C may include a media gateway (MGW) 844, a mobile switching center (MSC) 846, a serving GPRS support node (SGSN) 848, and/or a gateway GPRS support node (GGSN) 880. While each of the foregoing elements are depicted as part of the core network 806, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
[00119] The RNC 842a in the RAN 803 may be connected to the MSC 846 in the core network 806 via an IuCS interface. The MSC 846 may be connected to the MGW 844. The MSC 846 and the MGW 844 may provide the WTRUs 802a, 802b, and/or 802c with access to
circuit-switched networks, such as the PSTN 808, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and traditional land-line communications devices.
[00120] The RNC 842a in the RAN 803 may also be connected to the SGSN 848 in the core network 806 via an IuPS interface. The SGSN 848 may be connected to the GGSN 880. The SGSN 848 and the GGSN 880 may provide the WTRUs 802a, 802b, and/or 802c with access to packet-switched networks, such as the Internet 810, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and IP-enabled devices.
[00121] As noted above, the core network 806 may also be connected to the networks 812, which may include other wired or wireless networks that are owned and/or operated by other service providers.
[00122] FIG. 8D depicts a system diagram of the RAN 804 and the core network 807 according to an embodiment. As noted above, the RAN 804 may employ an E-UTRA radio technology to communicate with the WTRUs 802a, 802b, and/or 802c over the air interface 816. The RAN 804 may also be in communication with the core network 807.
[00123] The RAN 804 may include eNode-Bs 860a, 860b, and/or 860c, though it will be appreciated that the RAN 804 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 860a, 860b, and/or 860c may each include one or more transceivers for communicating with the WTRUs 802a, 802b, and/or 802c over the air interface 816. In one embodiment, the eNode-Bs 860a, 860b, and/or 860c may implement MIMO technology. Thus, the eNode-B 860a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 802a.
[00124] Each of the eNode-Bs 860a, 860b, and/or 860c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in FIG. 8D, the eNode-Bs 860a, 860b, and/or 860c may communicate with one another over an X2 interface.
[00125] The core network 807 shown in FIG. 8D may include a mobility management entity (MME) 862, a serving gateway 864, and a packet data network (PDN) gateway 866. While each of the foregoing elements are depicted as part of the core network 807, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
[00126] The MME 862 may be connected to each of the eNode-Bs 860a, 860b, and/or 860c in the RAN 804 via an S1 interface and may serve as a control node. For example, the MME 862 may be responsible for authenticating users of the WTRUs 802a, 802b, and/or 802c, bearer
activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 802a, 802b, and/or 802c, and the like. The MME 862 may also provide a control plane function for switching between the RAN 804 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.
[00127] The serving gateway 864 may be connected to each of the eNode-Bs 860a, 860b, and/or 860c in the RAN 804 via the S1 interface. The serving gateway 864 may generally route and forward user data packets to/from the WTRUs 802a, 802b, and/or 802c. The serving gateway 864 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs 802a, 802b, and/or 802c, managing and storing contexts of the WTRUs 802a, 802b, and/or 802c, and the like.
[00128] The serving gateway 864 may also be connected to the PDN gateway 866, which may provide the WTRUs 802a, 802b, and/or 802c with access to packet-switched networks, such as the Internet 810, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and IP-enabled devices.
[00129] The core network 807 may facilitate communications with other networks. For example, the core network 807 may provide the WTRUs 802a, 802b, and/or 802c with access to circuit-switched networks, such as the PSTN 808, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and traditional land-line communications devices. For example, the core network 807 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 807 and the PSTN 808. In addition, the core network 807 may provide the WTRUs 802a, 802b, and/or 802c with access to the networks 812, which may include other wired or wireless networks that are owned and/or operated by other service providers.
[00130] FIG. 8E depicts a system diagram of the RAN 805 and the core network 809 according to an embodiment. The RAN 805 may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs 802a, 802b, and/or 802c over the air interface 817. As will be further discussed below, the communication links between the different functional entities of the WTRUs 802a, 802b, and/or 802c, the RAN 805, and the core network 809 may be defined as reference points.
[00131] As shown in FIG. 8E, the RAN 805 may include base stations 880a, 880b, and/or
880c, and an ASN gateway 882, though it will be appreciated that the RAN 805 may include any number of base stations and ASN gateways while remaining consistent with an embodiment.
The base stations 880a, 880b, and/or 880c may each be associated with a particular cell (not shown) in the RAN 805 and may each include one or more transceivers for communicating with
the WTRUs 802a, 802b, and/or 802c over the air interface 817. In one embodiment, the base stations 880a, 880b, and/or 880c may implement MIMO technology. Thus, the base station 880a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 802a. The base stations 880a, 880b, and/or 880c may also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like. The ASN gateway 882 may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network 809, and the like.
[00132] The air interface 817 between the WTRUs 802a, 802b, and/or 802c and the RAN 805 may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs 802a, 802b, and/or 802c may establish a logical interface (not shown) with the core network 809. The logical interface between the WTRUs 802a, 802b, and/or 802c and the core network 809 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management.
[00133] The communication link between each of the base stations 880a, 880b, and/or 880c may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations 880a, 880b, and/or 880c and the ASN gateway 882 may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 802a, 802b, and/or 802c.
[00134] As shown in FIG. 8E, the RAN 805 may be connected to the core network 809. The communication link between the RAN 805 and the core network 809 may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example. The core network 809 may include a mobile IP home agent (MIP-HA) 884, an authentication, authorization, accounting (AAA) server 886, and a gateway 888. While each of the foregoing elements are depicted as part of the core network 809, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.
[00135] The MIP-HA may be responsible for IP address management, and may enable the
WTRUs 802a, 802b, and/or 802c to roam between different ASNs and/or different core networks. The MIP-HA 884 may provide the WTRUs 802a, 802b, and/or 802c with access to packet-switched networks, such as the Internet 810, to facilitate communications between the
WTRUs 802a, 802b, and/or 802c and IP-enabled devices. The AAA server 886 may be responsible for user authentication and for supporting user services. The gateway 888 may facilitate interworking with other networks. For example, the gateway 888 may provide the WTRUs 802a, 802b, and/or 802c with access to circuit-switched networks, such as the PSTN 808, to facilitate communications between the WTRUs 802a, 802b, and/or 802c and traditional land-line communications devices. In addition, the gateway 888 may provide the WTRUs 802a, 802b, and/or 802c with access to the networks 812, which may include other wired or wireless networks that are owned and/or operated by other service providers.
[00136] Although not shown in FIG. 8E, it should, may, and/or will be appreciated that the RAN 805 may be connected to other ASNs and the core network 809 may be connected to other core networks. The communication link between the RAN 805 the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs 802a, 802b, and/or 802c between the RAN 805 and the other ASNs. The
communication link between the core network 809 and the other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks.
[00137] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer- readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer- readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Claims
1. A method, comprising:
receiving, at a computing node in a wireless network, a computation request that comprises a script for execution;
parsing the script to determine one or more atoms referenced by the script, wherein an atom is an executable code component, and the one or more atoms are preloaded at one or more nodes in the wireless network prior to the receiving of the computation request;
determining where each of the one or more atoms are pre-loaded; and
upon determining that all of the one or more atoms are preloaded at the computing node, performing the computation request locally at the computing node; and
upon determining that at least one of the one or more atoms are not preloaded at the computing node:
sending a request for the atoms that are not pre-loaded locally, fetching the atoms that are not pre-loaded locally, and performing the computation request locally, or
determining that the computation request should be performed, at least in part, at another node.
2. The method of claim 1, wherein an atom uses Information Centric Network (ICN)-like content naming.
3. The method of claim 1, wherein the atom is provided by an application server.
4. The method of claim 1, further comprising receiving network condition information comprising one or more of computational loading, computational capabilities, node locations, and atom locations.
5. The method of claim 1 , wherein the computation request further comprises input data or an input data handle.
6. The method of claim 1, wherein the computation request comprises one or more of frame information, a data object name, a data object identifier, context information, and a payload name.
7. The method of claim 1, further comprising, if the computing node does not include all of the one or more atoms referenced by the script, determining an estimated load on the network for fetching the one or more atoms that are not pre-loaded locally or having the additional node perform at least a part of the computation request.
8. The method of claim 1, further comprising using an additional script to fetch the one or more atoms that are not pre-loaded locally.
9. The method of claim 1, further comprising publishing a result of the performed computation request.
10. The method of claim 1, wherein the computation request is received from an application client.
11. The method of claim 10, further comprising receiving an application policy comprising one or more of atom identifiers, atom information, number of atoms deployed in the network, a list of atoms commonly used together in a script, and a key to verify an atom is from an application provider.
12. The method of claim 1, wherein fetching an atom that is not pre-loaded locally comprises subscribing for the atom, receiving the atom program and metadata, and loading the atom in memory.
13. A computing node in a wireless network, comprising:
a processor configured to:
receive a computation request that comprises a script for execution;
parse the script to determine one or more atoms referenced by the script, wherein an atom is an executable code component, and the one or more atoms are preloaded at one or more nodes in the wireless network prior to the receiving of the computation request; determine where each of the one or more atoms are pre-loaded; and
upon determining that all of the one or more atoms are preloaded at the computing node, perform the computation request locally at the computing node; and
upon determining that at least one of the one or more atoms are not preloaded at the computing node:
send a request for the atoms that are not pre-loaded locally, fetch the atoms that are not pre-loaded locally, and perform the computation request locally, or
determine that the computation request should be performed, at least in part, at another node.
14. The computing node of claim 13, wherein an atom uses Information Centric Network (ICN)-like content naming.
15. The computing node of claim 13, wherein the atom is provided by an application server.
16. The computing node of claim 13, wherein the processor is further configured to receive network condition information comprising one or more of computational loading, computational capabilities, node locations, and atom locations.
17. The computing node of claim 13, wherein the computation request further comprises input data or an input data handle.
18. The computing node of claim 13, wherein the computation request comprises one or more of frame information, a data object name, a data object identifier, context information, and a payload name.
19. The computing node of claim 13, wherein, if the computing node does not include all of the one or more atoms referenced by the script, the processor is further configured to determine an estimated load on the network for: fetching the one or more atoms that are not preloaded locally, or having the additional node perform at least a part of the computation request.
20. The computing node of claim 13, wherein the processor is further configured to use an additional script to fetch the one or more atoms that are not pre-loaded locally.
21. The computing node of claim 13, wherein the processor is further configured to publish a result of the performed computation request.
22. The computing node of claim 13, wherein the computation request is received from an application client.
23. The computing node of claim 22, wherein the processor is further configured to receive an application policy comprising one or more of atom identifiers, atom information, number of atoms deployed in the network, a list of atoms commonly used together in a script, and a key to verify an atom is from an application provider.
24. The computing node of claim 13, wherein fetching an atom that is not pre-loaded locally comprises subscribing for the atom, receiving the atom program and metadata, and loading the atom in memory.
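For orientation, the following Python sketch illustrates the dispatch flow recited in the method and computing-node claims above: the node parses the received script for referenced atoms, performs the computation locally when every atom is pre-loaded, and otherwise weighs fetching the missing atoms against having another node perform the request (claims 7 and 19). The one-name-per-line script format, the helper names, and the unit-cost load estimates are illustrative assumptions, not language from the claims or the underlying disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ComputationRequest:
    script: str               # script naming the atoms to execute
    input_data: bytes = b""   # or an input data handle (claims 5 and 17)


@dataclass
class EdgeComputingNode:
    # Atoms pre-loaded at this node, keyed by atom name (hypothetical layout).
    preloaded: Dict[str, Callable[[bytes], bytes]] = field(default_factory=dict)

    def parse_atom_names(self, script: str) -> List[str]:
        # Assumed script format: one atom name per non-empty line.
        return [line.strip() for line in script.splitlines() if line.strip()]

    def handle(self, request: ComputationRequest) -> bytes:
        atoms = self.parse_atom_names(request.script)
        missing = [name for name in atoms if name not in self.preloaded]

        if not missing:
            # Every referenced atom is already pre-loaded: run locally.
            return self.execute_locally(atoms, request)

        # Otherwise weigh the network load of fetching the missing atoms
        # against having another node perform (part of) the request.
        if self.estimate_fetch_load(missing) <= self.estimate_delegation_load(request):
            for name in missing:
                self.preloaded[name] = self.fetch_atom(name)
            return self.execute_locally(atoms, request)
        return self.delegate(request)

    def execute_locally(self, atoms: List[str], request: ComputationRequest) -> bytes:
        data = request.input_data
        for name in atoms:        # chain the atoms in the order the script names them
            data = self.preloaded[name](data)
        return data               # the result could then be published (claims 9 and 21)

    # --- Placeholders standing in for network interactions not detailed here ---
    def estimate_fetch_load(self, missing: List[str]) -> float:
        return float(len(missing))          # assume unit cost per missing atom

    def estimate_delegation_load(self, request: ComputationRequest) -> float:
        return 1.0                          # assume a fixed delegation cost

    def fetch_atom(self, name: str) -> Callable[[bytes], bytes]:
        return lambda data: data            # stand-in for the subscribe/receive flow

    def delegate(self, request: ComputationRequest) -> bytes:
        raise NotImplementedError("forward the request to another node")
```

For example, a node pre-loaded with "resize" and "encode" atoms would run a script naming only those atoms entirely locally, while a script that also names a missing atom would trigger the fetch-versus-delegate comparison.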
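A companion sketch, again with assumed names, shows how an atom that is not pre-loaded locally might be fetched per claims 12 and 24: subscribe for the atom, receive the atom program and metadata, and load the atom in memory, with the received program checked against a key from the application provider (claims 11 and 23) and addressed by an ICN-like content name (claim 14). The pub/sub transport object, the "/app/atoms/<name>" naming layout, and the HMAC-based verification are assumptions for illustration only.

```python
import hashlib
import hmac
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class AtomDelivery:
    name: str                  # ICN-like content name, e.g. "/app/atoms/transcode"
    program: bytes             # serialized executable code component
    metadata: Dict[str, str]   # e.g. version, resource requirements
    signature: bytes           # provider signature over the program bytes


class AtomFetcher:
    def __init__(self, transport, provider_key: bytes):
        self.transport = transport          # assumed pub/sub transport object
        self.provider_key = provider_key    # key carried in the application policy
        self.loaded: Dict[str, Callable[[bytes], bytes]] = {}

    def fetch(self, atom_name: str) -> Callable[[bytes], bytes]:
        # 1. Subscribe for the atom under its ICN-like content name.
        delivery: AtomDelivery = self.transport.subscribe(f"/app/atoms/{atom_name}")

        # 2. Verify the atom program is from the application provider before
        #    loading it; an HMAC over the program bytes is just one possibility.
        expected = hmac.new(self.provider_key, delivery.program, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, delivery.signature):
            raise ValueError(f"atom {atom_name} failed provider verification")

        # 3. Load the atom program and metadata into memory.
        atom = self._load_program(delivery.program, delivery.metadata)
        self.loaded[atom_name] = atom
        return atom

    def _load_program(self, program: bytes, metadata: Dict[str, str]) -> Callable[[bytes], bytes]:
        # Placeholder loader: a real node would turn the received program bytes
        # into an executable code component; a pass-through callable stands in here.
        return lambda data: data
```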
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562269409P | 2015-12-18 | 2015-12-18 | |
US62/269,409 | 2015-12-18 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017106619A1 (en) | 2017-06-22 |
Family
ID=58054495
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2016/067133 WO2017106619A1 (en) | Systems and methods associated with edge computing | 2015-12-18 | 2016-12-16 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017106619A1 (en) |
2016-12-16: WO PCT/US2016/067133 patent/WO2017106619A1/en, active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070236345A1 (en) * | 2006-04-05 | 2007-10-11 | Motorola, Inc. | Wireless sensor node executable code request facilitation method and apparatus |
US20100235845A1 (en) * | 2006-07-21 | 2010-09-16 | Sony Computer Entertainment Inc. | Sub-task processor distribution scheduling |
US20130047165A1 (en) * | 2011-08-15 | 2013-02-21 | Sap Ag | Context-Aware Request Dispatching in Clustered Environments |
US20140067758A1 (en) * | 2012-08-28 | 2014-03-06 | Nokia Corporation | Method and apparatus for providing edge-based interoperability for data and computations |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3692441A1 (en) * | 2017-10-06 | 2020-08-12 | Convida Wireless, LLC | Enabling a fog service layer with application to smart transport systems |
US11614974B2 (en) | 2017-10-06 | 2023-03-28 | Convida Wireless, Llc | Enabling a fog service layer with application to smart transport systems |
US10715633B2 (en) | 2018-01-10 | 2020-07-14 | Cisco Technology, Inc. | Maintaining reachability of apps moving between fog and cloud using duplicate endpoint identifiers |
US20190325262A1 (en) * | 2018-04-20 | 2019-10-24 | Microsoft Technology Licensing, Llc | Managing derived and multi-entity features across environments |
US11704370B2 (en) | 2018-04-20 | 2023-07-18 | Microsoft Technology Licensing, Llc | Framework for managing features across environments |
US11601949B2 (en) | 2018-08-28 | 2023-03-07 | Koninklijke Philips N.V. | Distributed edge-environment computing platform for context-enabled ambient intelligence, environmental monitoring and control, and large-scale near real-time informatics |
US11381575B2 (en) | 2019-05-03 | 2022-07-05 | Microsoft Technology Licensing, Llc | Controlling access to resources of edge devices |
CN112130993A (en) * | 2020-09-07 | 2020-12-25 | 国网江苏省电力有限公司信息通信分公司 | Power edge Internet of things agent edge calculation method and system based on graphical modeling |
CN112130993B (en) * | 2020-09-07 | 2024-05-24 | 国网江苏省电力有限公司信息通信分公司 | Electric power edge internet of things proxy edge calculation method and system based on graphical modeling |
CN112130999A (en) * | 2020-09-23 | 2020-12-25 | 南方电网科学研究院有限责任公司 | Electric power heterogeneous data processing method based on edge calculation |
CN112130999B (en) * | 2020-09-23 | 2024-02-13 | 南方电网科学研究院有限责任公司 | Electric power heterogeneous data processing method based on edge calculation |
CN112886615A (en) * | 2021-03-23 | 2021-06-01 | 清华大学 | Secondary frequency modulation cooperative control system and method based on 5G and ubiquitous resources |
CN118612309A (en) * | 2024-08-07 | 2024-09-06 | 国网湖北省电力有限公司电力科学研究院 | Intelligent gateway conversion method, device, equipment and storage medium |
Similar Documents
Publication | Title |
---|---|
US11888711B2 (en) | Capability exposure for service instantiation | |
WO2017106619A1 (en) | Systems and methods associated with edge computing | |
US11234213B2 (en) | Machine-to-machine (M2M) interface procedures for announce and de-announce of resources | |
JP6659627B2 (en) | Systems, methods, and apparatus for managing machine-to-machine (M2M) entities | |
EP3456090B1 (en) | Connecting to virtualized mobile core networks | |
JP6560259B2 (en) | Automated service profiling and orchestration | |
US10015293B2 (en) | Method and apparatus for incorporating an internet of things (IoT) service interface protocol layer in a node | |
JP2018512645A (en) | Message bus service directory | |
WO2018232253A1 (en) | Network exposure function | |
CN109451804B (en) | cNAP and method executed by cNAP and sNAP | |
EP2727295A1 (en) | Managing data mobility policies | |
EP2803208A2 (en) | Method and apparatus for supporting machine-to-machine communications | |
WO2013134211A2 (en) | Method and system for cdn exchange nterconnection | |
KR102500594B1 (en) | Service Layer Message Templates in Communication Networks | |
US20190104032A1 (en) | Elastic service provisioning via http-level surrogate management | |
WO2017070545A1 (en) | Software-defined network enhancements enabling programmable information centric networking in edge networks | |
EP4342158A1 (en) | Multi-access edge computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16837994; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 16837994; Country of ref document: EP; Kind code of ref document: A1 |