WO2022031275A1 - Detection of memory modification - Google Patents
- Publication number
- WO2022031275A1 (PCT/US2020/044922)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- memory
- region
- modification
- address
- computing device
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/70—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
- G06F21/78—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
- G06F21/79—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/64—Protecting data integrity, e.g. using checksums, certificates or signatures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/302—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
- G06F11/3471—Address tracing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45562—Creating, deleting, cloning virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45587—Isolation or security of virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/865—Monitoring of software
Definitions
- the process 300 may monitor a region of memory.
- the monitored region may represent an address space for a set of protected functions, which may be segmented within the memory.
- the monitored region may comprise an address space for a list of loaded dynamic link libraries (DLLs) and addresses for a list of critical API functions.
- the process 300 may obtain a list of the loaded DLLs at 301 and store respective address spaces and module names at 302.
- the process 300 may also obtain addresses and memory regions for critical APIs or API functions at 303 and 304, respectively.
- the critical API functions may comprise those API functions that initiate system calls to the operating system kernel, such as a set of native API calls, which are hooked to monitor for malicious activity. Detection of modifications to the hooks on these native API calls and detection of modifications within the address space of the loaded DLLs combine to provide enhanced security. Any modifications to functions within the monitored region, as a result of a function call, may be detected.
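By way of illustration, the bookkeeping at 301-304 might record the mapped address ranges of every loaded module. The sketch below is a POSIX analogue using the glibc dl_iterate_phdr API; on Windows the module list would come from the process environment block or EnumProcessModules instead. This is a minimal sketch, not the patent's implementation.

```c
/* Records the address spaces and names of loaded modules, the POSIX
 * analogue of obtaining the loaded-DLL list and storing address spaces
 * and module names. */
#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int record_module(struct dl_phdr_info *info, size_t size, void *data) {
    (void)size; (void)data;
    /* Each PT_LOAD segment is one mapped address range of the module. */
    for (int i = 0; i < info->dlpi_phnum; i++) {
        if (info->dlpi_phdr[i].p_type != PT_LOAD) continue;
        unsigned long start = info->dlpi_addr + info->dlpi_phdr[i].p_vaddr;
        unsigned long end = start + info->dlpi_phdr[i].p_memsz;
        printf("module %s: %#lx-%#lx\n",
               info->dlpi_name[0] ? info->dlpi_name : "(main)", start, end);
    }
    return 0;  /* continue iteration over all loaded modules */
}

int main(void) {
    dl_iterate_phdr(record_module, NULL);  /* walk the loaded-module list */
    return 0;
}
```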
- the process may determine a target address of a function call. For example, requests for modification of the in-memory code may be detected and the process may determine the address at which the modifications have been requested or made. Depending on this target address, the process may proceed differently at 306. If the target address is outside the monitored region of the memory, this may indicate that protected functions are not being targeted. The process 300 may then allow the function/modification at 307. For example, the process 300 may allow the targeted function to make calls to the system kernel.
- the target address may be within the monitored region.
- a function within the monitored region may be called and modified. Since modifications are made to a protected function within the monitored region, these may be potentially malicious.
- the process 300 may determine at 308 whether potentially malicious modifications have been made at the target address.
- the types of malicious modifications may also be updated over time based on developing detection models. An example of this may include checking for similar patterns of behaviour of previously identified malware.
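One simple way such a detection model could use previously identified malware is a byte-signature scan over the suspect region. The sketch below is illustrative only; the function names are hypothetical and real signature content would come from earlier detections.

```c
/* Minimal byte-signature scan: true when a known malicious byte pattern
 * from earlier detections occurs anywhere in the suspect region. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static bool matches_signature(const uint8_t *region, size_t region_len,
                              const uint8_t *sig, size_t sig_len) {
    if (sig_len == 0 || region_len < sig_len) return false;
    for (size_t i = 0; i + sig_len <= region_len; i++)
        if (memcmp(region + i, sig, sig_len) == 0) return true;
    return false;
}
```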
- Potentially malicious modifications may comprise, but are not limited to, modifying memory page protection and/or an original address of the function at the target address.
- the process may proceed from 309 to allow operation of the function at 310. For example, if potentially malicious modifications have not been made at the target address, the modified function may be safe to operate. The modified function may then be free to make system calls without being blocked. If potentially malicious modifications have been made at that target address, this may indicate that malicious modifications have been made to a protected function. This may be the case where malware has removed hooks placed by security solutions.
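A check of this kind at 308 can be implemented, for instance, by comparing a snapshot of a protected function's entry bytes against live memory. A minimal sketch, assuming the detection layer snapshots each protected function when monitoring begins (reading code bytes through a data pointer is implementation-defined but common practice in such tooling):

```c
/* Detects in-place code modification of a protected function by comparing
 * its current entry bytes against a snapshot taken when monitoring began. */
#include <stdbool.h>
#include <string.h>

#define PROLOGUE_LEN 16  /* bytes of the function entry to track */

typedef struct {
    const unsigned char *entry;            /* address of protected function */
    unsigned char snapshot[PROLOGUE_LEN];  /* bytes recorded at load time */
} guard_t;

static void take_snapshot(guard_t *g, const void *function_entry) {
    g->entry = function_entry;
    memcpy(g->snapshot, g->entry, PROLOGUE_LEN);
}

/* True when the entry bytes no longer match, e.g. a hook was added,
 * replaced, or removed since the snapshot was taken. */
static bool code_was_modified(const guard_t *g) {
    return memcmp(g->entry, g->snapshot, PROLOGUE_LEN) != 0;
}
```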
- the process may proceed from 309 to determine a return address of the function call at 311. For example, the process may determine an address from which the function call was made that resulted in the modifications. Identification of whether the function call is potentially malicious may then be further based on the determined return address.
- the process 300 may determine whether the return address lies within a trusted region of the memory.
- This trusted region may represent a region of memory from which legitimate function calls and modifications may be made.
- the trusted region may be an address space including the monitored region, legitimate CLR code, valid JIT code, whitelisted regions and/or boundaries of code sections. If the return address is within this trusted region, the modifications may have been made by trusted code. In an example, the modifications may have been made by legitimate security solutions. This may be the case where security solutions perform API hooking, this action having been determined as potentially malicious at 308.
- the process may therefore allow and/or whitelist the function at 313.
- the region of memory representing this whitelisted function may then be used to update the trusted region for future processes.
- the process may instead identify that the function call and modifications have been made from malicious code, such as malware shellcode.
- malware may have modified the function at the target address to remove hooks placed by security solutions and/or add hooks of their own.
- the process may determine at 314 a memory region of the shellcode.
- this memory region may contain the contents of the malware code.
- This section of code may be that responsible for making the call to a function at the target address, and modifying said function.
- the process may dump the contents of this shellcode, and the determination may be used to build malware detection and defence rules and databases for enhancing future detection processes.
- the process may issue an alert and/or block operation of the modified function.
- the shellcode contents may also be included as part of the alert.
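For example, the dump at 314 might serialise the suspect region as hex so it can be attached to the alert or fed into rule-building pipelines. A minimal sketch with simplified buffer handling and illustrative names:

```c
/* Renders a suspect memory region as a hex dump so its contents can be
 * included in an alert or used to build detection rules. */
#include <stdint.h>
#include <stdio.h>

static void dump_region(const uint8_t *region, size_t len, FILE *out) {
    for (size_t i = 0; i < len; i++) {
        if (i % 16 == 0)
            fprintf(out, "%s%08zx:", i ? "\n" : "", i);  /* offset column */
        fprintf(out, " %02x", region[i]);
    }
    fputc('\n', out);
}

int main(void) {
    uint8_t sample[] = { 0x90, 0x90, 0xe9, 0x10, 0x20, 0x30, 0x40 };
    dump_region(sample, sizeof sample, stdout);  /* stand-in shellcode */
    return 0;
}
```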
- the process 300 therefore monitors for modification of protected functions within a memory, differentiating between legitimate and malicious modifications. Consequently, security solutions may perform API hooking without the application of hooks being identified as false positives. It should be understood that the process 300 is one example of a process that may be used to identify if a caller function is potentially malicious. In other examples, the procedure shown in process 300 may be modified to include and/or remove steps, and perform steps in different orders.
- the identification of whether a function call or called function is potentially malicious may be performed in a virtualized environment.
- the application may be run inside a virtual machine, which is a process running on the computing device that emulates some or all functions of a separate computing device and operates like a separate, independent computing device within the host computer.
- lightweight virtual machines or “micro virtual machines” are created on demand for a plurality of operations running within the same computer system.
- a micro virtual machine is a process that is isolated from other micro virtual machines and requires only a small amount of the total system resources.
- while each micro virtual machine may require access to different resources of the computing device, each micro virtual machine may be created from the same template or a set of templates running on the same device and making use of the same underlying system BIOS.
- an operating system is software that manages the computing device’s hardware and software resources and provides common services for use by application programs that device users interact with.
- Instructions included within a BIOS may be software, firmware, microcode, or other programming that defines or controls functionality or operation of a BIOS.
- a BIOS may be implemented using instructions, such as platform firmware of a computing device, executable by a processor.
- a BIOS may operate or execute prior to the execution of the OS of a computing device.
- a BIOS may initialize, control, or operate components such as hardware components of a computing device and may load or boot the OS of the computing device.
- a BIOS may provide or establish an interface between hardware devices or platform firmware of the computing device and an OS of the computing device, via which the OS of the computing device may control or operate hardware devices or platform firmware of the computing device.
- a BIOS may implement the Unified Extensible Firmware Interface (UEFI) specification or another specification or standard for initializing, controlling, or operating a computing device.
- a micro virtual machine can be used to isolate a potentially untrusted process from the computer's host operating system and from applications running within other virtual machines.
- Each micro virtual machine may be used to run a limited number of applications at one time or a single application or even a single task within an application, with the execution of applications and tasks in one micro virtual machine being isolated from other virtual machines running on the same device or system.
- Many micro virtual machines may be run at one time in order to compartmentalize the execution of applications and/or other processes running in the computing device. This can provide enhanced security by reducing the potential for contamination between executing processes on separate micro VMs, and by containing untrusted operations.
- the micro virtual machines are lightweight virtual machines that can be created, maintained and terminated on-demand, and may exist for a limited time while the application within the micro virtual machine is running, before being terminated when their intended purpose is complete.
- any code execution which may be potentially malicious can be contained within its own micro virtual machine, which is then destroyed after its intended use is completed or upon identification of memory tampering, thereby disallowing malicious code from effecting any lasting change to a computing device or the network.
- the micro virtual machine may run a local application or an individual web page session.
- when a user-initiated operation is completed, such as running a local application, or when navigating away from a web page to another page with a different Internet URL domain, the corresponding micro virtual machine can be destroyed. Any new local application or web application can then be run inside a new separate micro virtual machine that may be cloned from a micro virtual machine master template.
- the combination of the detection of memory tampering as described herein, and the isolation of operations within a plurality of virtual machines in the described virtual machine environment may allow enhanced security on a computing device. For example, malicious activity may be carried out in an application that is running on a virtual machine that is only given access to limited system resources, and is isolated from applications running in other virtual machines on the same device.
- the code running inside any virtual machine may be highly controlled and kept separate from processes of the underlying operating system. This may enhance malware detection and resolution because modifications to code accessible by the virtual machine can be contained and removed without compromising the execution of other processes running within another virtual machine.
- the isolation of processes makes it possible to quickly and efficiently terminate a potentially malicious operation, or to quarantine relevant processes, without affecting other processes running in other virtual machines.
- the isolation between processes combined with the efficiency of process termination enables a rapid reaction to a potentially malicious operation, and reduces the motivation to collect additional behavioural indicators before taking action.
- the isolation between virtual machines also reduces the potential consequences of malicious operations that are performed within any single virtual machine. Therefore, the described solution for detection of memory tampering is well suited to a virtualised environment in which lightweight micro virtual machines are created for new operations in advance of determining whether or not they are trusted or potentially malicious operations.
- the isolation between virtual machines may also facilitate the detection of malicious activity, because of the limited number of processes and resources to be considered within any one virtual machine environment. For example, if no applications within the virtual machine are expected to modify the in-memory code accessible by the virtual machine, any detected modifications may indicate potentially malicious activity.
- a computing device may receive a request to run an application at 401.
- a virtual machine may be generated in the computing device at 402.
- a processor may be configured to generate a virtual machine in response to initiation of certain types of operation - such as in response to each system call that requires a function within the operating system kernel of the computing device.
- a processor may identify whether the function call within the virtual machine is potentially malicious.
- a function call may be identified as potentially malicious in accordance with any of the examples described above.
- the virtual machine may be a micro virtual machine, and many of these micro virtual machines may be running simultaneously.
- the process reports the identification of a potentially malicious operation and terminates the operation and/or quarantines the process that initiated the operation. The reporting allows collection of data for operations identified as malicious or potentially malicious, and processing for visualisation and analysis.
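The reporting step might emit one structured record per verdict for downstream collection and analysis. A minimal sketch; the record fields and identifiers are illustrative, not taken from the patent:

```c
/* Emits a structured alert record for a potentially malicious operation so
 * it can be collected for visualisation and analysis. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *vm_id;      /* micro virtual machine that ran the operation */
    uintptr_t target_addr;  /* where the modification was attempted */
    uintptr_t return_addr;  /* where the request came from */
    const char *verdict;    /* e.g. "potentially-malicious" */
} alert_t;

static void report_alert(const alert_t *a, FILE *out) {
    fprintf(out, "vm=%s target=%#lx return=%#lx verdict=%s\n",
            a->vm_id, (unsigned long)a->target_addr,
            (unsigned long)a->return_addr, a->verdict);
}

int main(void) {
    alert_t a = { "uvm-17", 0x7f0012340000UL, 0x00405000UL,
                  "potentially-malicious" };
    report_alert(&a, stderr);  /* an upstream collector would consume this */
    return 0;
}
```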
- an example computing device 500 comprising a hardware and BIOS layer that in this example includes the processor 501.
- Processor 501 and a host operating system 502 control major operations of the computing device 500.
- the computing device 500 may be a computing device running a hypervisor 503 that is capable of creating and managing virtual machines 504, 505.
- User-initiated operations within application programs 506, 507, 508 run within a respective one of the virtual machines 504, 505, which each run an operating system emulation 502', 502''.
- Examples of computing device 500 include a PC, a laptop computer, a tablet computer, a cell phone, a personal digital assistant (PDA), and the like.
- the hypervisor 503 may be used to create a first virtual machine 504, using services provided by the host operating system 502.
- the virtual machine may be generated in response to a request for an application to run, and the hypervisor allocates required system resources.
- the application 506, 507 and an emulated operating system instance 502' within the virtual machine 504 run in isolation from the host operating system 502 of computing device 500.
- the virtual machine may be a micro virtual machine that is created by a Microvisor.
- the Microvisor is a hypervisor that is adapted to use hardware virtualisation support of the computing device on which the Microvisor runs and to create virtual machines that are tailored to support a particular task, with only the required system resources being allocated to each micro virtual machine.
- a micro virtual machine can be created for each new application-level operation that has potential vulnerabilities, such as a user-selection of a browser tab or email attachment.
- a virtual machine is created and destroyed as soon as the relevant task is complete; in other examples, the virtual machine may remain in a suspended state following its creation, until an application is to be opened, and then the virtual machine 504, 505 is reactivated.
- An application running in the virtual machine may make a function call to request services of the emulated operating system 502’ within the same virtual machine.
- the function call may have a target address within a region of memory that is accessible to the virtual machine, and the function call may lead to a modification of in-memory code at this target address.
- the processor may be running a malware detection process within the hypervisor (which may be a Microvisor) or as a detection process running within each virtual machine, to identify whether each function call is potentially malicious.
- the malware detection process may identify tampering of memory accessible by the micro virtual machine. Identifying whether a function call is potentially malicious may be performed in accordance with the process described in FIG. 4.
- An example computing device creates a micro virtual machine for execution of an operation, which may be an operation type having a security vulnerability.
- a process running on the device detects requests for in-memory code modifications as part of the operation running within the virtual machine, and identifies attempts at memory tampering by: monitoring a region of memory accessed via the virtual machine, the monitored region corresponding to an address space for a set of protected functions; determining, when a request is made for modification of the memory, whether a target address of the request is within the monitored region; and identifying the requested modification as potentially malicious if the target address of the request is within the monitored region; memory protection at the target address has been modified; and a return address of the request is outside a trusted region. If the operation is identified as malicious, the virtual machine can be terminated.
- the combination of highly granular task-specific malware detection, based on target addresses and return addresses before operations are completed, with highly granular micro virtual machines is advantageous to mitigate security risks.
- a micro virtual machine may be disposable, wherein it may be created, maintained, and destroyed on-demand. Such virtual machines may exist for a limited time that an application is running within them. For example, the virtual machine may be destroyed once an application is terminated.
- a plurality of virtual machines may be running concurrently on the computing device 500. For example, a plurality of micro virtual machines may run concurrently. Different applications may be running in different virtual machines on the computing device.
- respective operating system images of the computing device 500 at the time of creation of the virtual machines may also be created for running in the respective virtual machines. Therefore, a virtual machine may possess its own instance or copy of the operating system, which is isolated and separate from the main operating system executing within the computing device 500.
- an example device 600 comprising a processor 601 and a random access memory 602, a read only memory 603 and a storage device 604.
- a communication interface 605 provides communication via a network link 606.
- the device's main processing hardware is connected to a display device 607 and an input device.
- the computing device 600 may form part of a larger multiprocessor computer system.
- the processor may be configured by software such as detection code within the above-described hypervisor to determine whether a function call is potentially malicious.
- the processor may be configured to detect whether potentially malicious modifications have been made to the memory 602.
- Identification of potentially malicious function calls may be performed in accordance with any of the examples described above.
- FIG. 7 shows an example of a computer readable medium 702, which is a non- transitory storage medium storing instructions 710, 711, 712, 713 that, when executed by a processor 700 coupled to a memory 701, cause the processor 700 to identify whether a function call is potentially malicious in accordance with the examples described above.
- the term “non-transitory storage medium” does not encompass transitory propagating signals.
- the executable instructions may cause the processor to generate a virtual machine.
- the computer readable medium may cause the processor to identify whether a requested operation within the micro virtual machine is potentially malicious. For example, the processor may identify a potentially malicious function call in accordance with the process described with respect to FIG. 4.
- the computer readable medium 702 may be any form of storage device capable of storing executable instructions, such as a non-transient computer readable medium, for example Random Access Memory (RAM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), a storage drive, an optical disc, or the like.
Abstract
Described are an example computing device, recording medium, and process for detection of memory tampering. This may involve monitoring a region of memory corresponding to an address space of protected functions. Requests for memory modification are evaluated to identify requests with a target address within the monitored region. For these requests, a determination is made of whether the return address is inside or outside a trusted region of memory. Requests for memory modification that fall within the monitored region and have a return address outside the trusted region are treated as potentially malicious; they are reported, and their operations may be blocked. An example computing device creates a virtual machine for execution of an operation that involves a security vulnerability, and detects in-memory code modifications within that virtual machine. If the operation is identified as malicious, the virtual machine can be terminated.
Description
Detection of memory modification
BACKGROUND
[0001] There is a need to protect data and resources of a computing device against unauthorised intrusion, including protecting against execution of malicious software which attempts to corrupt or obtain access to data on the device or within a connected network of devices. Security software solutions attempt to detect tampering and exploitation of memory by malicious software (“malware”). Advanced malware incorporates a variety of techniques that help to evade static analysis and behaviour detection by security products, so that malicious actions can be performed without being detected or blocked. There is a need for enhanced detection of memory tampering to protect against malware.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] FIG. 1 shows an example process for identifying whether a function call is potentially malicious.
[0003] FIG. 2 shows another example process for identifying if a function call is potentially malicious.
[0004] FIG. 3 shows another example process for identifying if a function call is potentially malicious.
[0005] FIG. 4 shows an example process for identifying, within a virtual machine, if a function call is potentially malicious.
[0006] FIG. 5 shows an example of a computing device configured to identify if a function call, within a virtual machine, is potentially malicious.
[0007] FIG. 6 shows an example device for identifying if a function call is potentially malicious.
[0008] FIG. 7 shows an example of a computer readable medium comprising instructions to identify if a function call is potentially malicious.
DETAILED DESCRIPTION
[0009] Malware authors may attempt to modify in-memory code to evade detection and to perform malicious actions without being blocked. For example, memory tampering that includes code modifications may allow function calls from application programming interfaces (APIs) to be used to perform malicious actions without detection. Malware may seek to exploit standard operations such as writing a file to disk, creating a process, loading code or data from a library, and creating a network connection. These actions may be triggered by malware-initiated system calls, which send requests to the operating system kernel to access high-privilege resources. Security solutions may attempt to detect the performance of such activity by malicious code. For example, security solutions may use API hooking to intercept and record API calls. Hooking an API call may involve modifying in-memory code for an API call, such as replacing an instruction in the API code with an instruction that jumps to an alternative memory address before reading the memory. At this alternative address, security solutions may have generated code referred to as “trampoline code”. This trampoline code may perform a set of actions before jumping back to the original function. For example, these hooking actions may comprise: recording the API call; modifying the API call to prevent certain actions; and blocking certain instances of an API call to stop malicious activity. After performing actions at the alternative address, the trampoline code may perform the replaced instruction and jump back to the next instruction in the API code. The original function may continue execution without either the code that called the API function or the operating system kernel being aware that the function call has been intercepted.
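As a rough illustration of the record-then-forward behaviour described above, the sketch below models a hook with a function-pointer swap; a real hook instead overwrites the first machine instructions of the API function with a jump, and the trampoline executes the displaced instruction before resuming. All names here are illustrative.

```c
/* A minimal, portable analogue of API hooking with trampoline-style
 * forwarding: the "patch" is a function-pointer swap so the sketch
 * compiles anywhere. */
#include <stdio.h>

static int real_open_file(const char *path) {
    printf("kernel request: open %s\n", path);
    return 0;
}

/* The dispatch slot plays the role of the patched function entry. */
static int (*open_file_entry)(const char *) = real_open_file;

/* Saved original entry: the "trampoline" jumps back here after logging. */
static int (*open_file_original)(const char *) = real_open_file;

static int open_file_hook(const char *path) {
    printf("hook: recording call to open_file(%s)\n", path);  /* record */
    return open_file_original(path);  /* jump back to the original code */
}

static void install_hook(void) {
    open_file_original = open_file_entry;  /* remember where to return */
    open_file_entry = open_file_hook;      /* redirect the entry point */
}

int main(void) {
    open_file_entry("a.txt");   /* unhooked: goes straight to the API */
    install_hook();
    open_file_entry("b.txt");   /* hooked: recorded, then forwarded */
    return 0;
}
```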
[0010] As security system architectures and malware have both become more advanced, malware authors have developed techniques to try to evade detection by security solutions.
For example, malware authors may generate code that removes hooks placed by security solutions or modifies the logic associated with a hook in order to evade detection. Malware authors may set their own hooks before performing malicious actions. If malware authors are successful in evading detection by security solutions, their malware may be able to perform malicious operations without being monitored or blocked by security solutions.
[0011] The present disclosure describes how tampering and exploitation by malware may be detected and potentially prevented. The present disclosure may also distinguish between
legitimate and malicious modifications, possibly reducing ‘false positives’ (i.e. reducing incorrect identifications of malicious intent, such that trusted operations are still performed efficiently). In a solution described below, a set of significant API calls are hooked to detect any functions using those calls. This may involve hooking any API calls which send requests to the device's operating system kernel, and it may involve hooking native API functions rather than relying solely on detection of calls to Windows APIs. The solution involves detecting potentially malicious operations by monitoring for code changes in a monitored region of memory that includes a set of protected functions, and applying assessment criteria based on the nature of the code modification together with a current address or target address, and a return address, of the protected functions.
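In outline, those assessment criteria reduce to two address-range tests. A minimal sketch with illustrative types and names; the actual ranges would come from the module and JIT bookkeeping described later:

```c
/* Sketch of the assessment criteria: a modification request is flagged when
 * its target address falls inside the monitored region while the requesting
 * code, identified by its return address, lies outside every trusted region. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uintptr_t start, end; } range_t;

static bool in_ranges(const range_t *r, size_t n, uintptr_t addr) {
    for (size_t i = 0; i < n; i++)
        if (addr >= r[i].start && addr < r[i].end) return true;
    return false;
}

/* monitored: address space of the protected functions;
 * trusted: monitored region plus CLR/JIT and whitelisted regions. */
static bool potentially_malicious(const range_t *monitored, size_t nm,
                                  const range_t *trusted, size_t nt,
                                  uintptr_t target, uintptr_t ret_addr) {
    return in_ranges(monitored, nm, target) &&
           !in_ranges(trusted, nt, ret_addr);
}
```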
[0012] In the following detailed description, reference is made to the accompanying drawings which form a part of this patent specification and which show, by way of illustration, specific examples of devices, systems, methods, and computer programs in which the disclosed solutions may be practiced. It is to be understood that other examples may be utilized and structural or logical changes to the described examples may be made without departing from the scope of the present disclosure. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of protection for the disclosed solutions is defined by the appended claims. It is to be understood that features of the various examples described herein may be combined with each other, in part or as a whole, unless specifically noted otherwise.
[0013] Described below are functions that are implementable in computer program code, for execution by a processor or a plurality of processors. The processor is configured by the program code to carry out the required processing. In the following description, the term ‘processor’ is to be understood as logic capable of processing and responding to instructions. The processor may take any form that would be understood in the technical field, such as hardware circuitry, software or systems. While specific examples of a processor and device architecture may be described below, this is not meant to limit the implementation of the processor or architecture to that particular description. An example detection method described below is provided within a virtualized environment in which various operations are performed within separate virtual machines provided on the same computing device. In one example, multiple virtual machines are generated and only a subset of system resources are exposed to the functions executing within each virtual
machine. If the multiple virtual machines each have low processing requirements and are isolated from each other, security can be enhanced without high processing overheads. By isolating non-trusted operations or potentially all user-initiated operations within their own dedicated virtual machines, it is possible to enhance the protection from those operations.
Detection solutions described below are suitable for use within a virtualized environment, but the present disclosure is not limited to use of virtualisation.
[0014] In an example within the present disclosure, a process for detection of memory tampering comprises: (i) identifying a region of memory to be monitored, the region of memory corresponding to the memory address space of a set of protected functions; (ii) identifying a request for a code modification having a target address within the monitored region; and (iii) identifying the code modification as potentially malicious if the target address of the identified modified code is within the monitored region and the code modification is initiated from outside a trusted region of the memory.
[0015] This can provide a highly-granular task-specific detection process, as each request for a memory modification can be evaluated quickly based on target and return addresses. The trusted region of memory may comprise one or more regions of memory into which trusted functions have been loaded (the start and end addresses being recorded when trusted binaries such as antivirus software are loaded into memory) or which are dynamically allocated to trusted functions (e.g. valid addresses allocated to common language runtime (CLR) code or just-in-time (JIT) code).
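A trusted-region registry of this kind might be maintained as below. This is a minimal sketch with fixed-size storage and illustrative names; trust_region() would be called both when a trusted binary is loaded and when a valid CLR/JIT range is allocated:

```c
/* Registry of trusted address ranges: module load ranges recorded when
 * trusted binaries are mapped, plus dynamically allocated CLR/JIT ranges
 * added later. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_TRUSTED 64

typedef struct { uintptr_t start, end; } region_t;

static region_t trusted[MAX_TRUSTED];
static size_t n_trusted;

/* Called when a trusted binary or a valid CLR/JIT range appears. */
static bool trust_region(uintptr_t start, uintptr_t end) {
    if (n_trusted == MAX_TRUSTED || start >= end) return false;
    trusted[n_trusted++] = (region_t){ start, end };
    return true;
}

static bool is_trusted(uintptr_t addr) {
    for (size_t i = 0; i < n_trusted; i++)
        if (addr >= trusted[i].start && addr < trusted[i].end) return true;
    return false;
}
```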
[0016] An example computing device comprises a memory and a processor. The processor may be controlled by instructions executed by the processor to monitor a region of the memory, which monitored region corresponds to an address space for a set of protected functions. The instructions may cause the processor to determine, in response to a request for a code modification, whether a target address of the requested modification is within the monitored region; and to identify the request for a code modification as potentially malicious when the target address of the request for modification is within the monitored region and a return address of the request for modification is outside a trusted region of the memory. [0017] In an example, the request for a code modification is identified as potentially malicious if it is identified as affecting memory protection at the target address - for example if the request is to change an access permission or remove or modify a hook.
[0018] In some examples, the protected functions may comprise application programming interface (API) functions and functions within dynamic link libraries (DLLs). In some examples, the detection process may detect a request for modification of the in-memory code, such as a change of access permissions or a change to the target address of a hook on an API call that requests functions within an operating system kernel. The memory tampering can involve, for example, modifying or removing a hook on one of a set of native API calls including API calls that change memory permissions. If the memory location of the code modification is within the monitored region, this may indicate that the modification has been made or requested at an address of a protected function; and if the code modification is requested from outside a trusted region, it may be identified as potentially malicious. If such code modifications are allowed without checks or restrictions, they may result in a malicious program code making system calls without being blocked.
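By way of illustration, a permission-change request could be vetted before it reaches the operating system. The sketch below wraps POSIX mprotect as a stand-in for whichever permission-changing API the platform exposes; check_request() is a hypothetical entry point into the detection logic, and the return address is captured with the GCC/Clang builtin:

```c
/* Vets a memory-permission change before forwarding it to the OS.
 * check_request() is a hypothetical hook into the detection logic (target
 * in monitored region, requester outside trusted region => flagged). */
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Hypothetical: nonzero when the request is potentially malicious. */
extern int check_request(uintptr_t target, uintptr_t return_address);

int guarded_mprotect(void *addr, size_t len, int prot) {
    /* Where the request came from: the caller's resume address. */
    uintptr_t ret = (uintptr_t)__builtin_return_address(0);
    if (check_request((uintptr_t)addr, ret)) {
        errno = EPERM;   /* block the modification; an alert can be raised */
        return -1;
    }
    return mprotect(addr, len, prot);
}
```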
[0019] If code modifications within a monitored region of memory are detected, the process may determine a return address of the caller function. This return address may be either within or outside a trusted region of the memory. This distinction may indicate whether the modifications are legitimate or potentially malicious. For example, if the return address is within the monitored region or other trusted regions, this indication that modifications of a protected function have been made from a trusted region of the memory can be a basis for allowing the modification. This may be the situation where a trusted function or application has modified in-memory code. In some examples, this may arise as a result of security solutions modifying a protected API, such as performing API-hooking. Operation of the modified function may then be allowed, thereby avoiding a potential consequence of an incorrect identification of malicious activity (i.e. avoiding the performance impact of a ‘false positive’).
[0020] In some examples, the return address may be outside the trusted region of the memory, which may indicate that the function call and corresponding modifications have been made by potentially malicious code. For example, shellcode may have called a function and then tampered with the memory region. In some examples, the malicious code may be attempting to avoid detection by security solutions before demonstrating malicious behaviour. Operation of the called function may then be blocked, and an alert may be issued. The process may therefore prevent operation of malicious functions, at least until further analysis can be carried out for such potentially malicious operations.
[0021] With reference to FIG. 1, there is shown an example process 100 for identifying if a function call is potentially malicious. In this example, the process 100 may monitor a region of memory of a computing device at 101. In some examples, this monitored memory region may correspond to an address space for a set of protected functions, and malicious code may attempt to modify these protected functions to perform malicious actions. For example, malware may attempt to remove hooks placed by security solutions and/or add their own hooks.
[0022] In response to a request for modification of in-memory code, a target address of a function call may be identified at 102. For example, the target address may be an address within an address space in the memory at which the modifications are to be made or have been made. This may indicate that modifications to a protected function have been requested. If the target address of the function call is within the monitored region, a check may be performed of whether potentially malicious modifications have been requested to be made at the target address. If this function call is not intercepted and subjected to a risk determination, modifications by malware may go undetected, allowing the malware to perform calls to the system kernel without being blocked.
[0023] The function call return address may be identified at 103. In some cases, this return address may be the address from which the function call originated. At 104, the process may determine whether the function call is potentially malicious. For example, the process may determine that the return address is within a trusted region of the memory. In this example, this may indicate that the in-memory modifications have been performed by a piece of protected code. For example, modifications by legitimate security solutions may be identified. If the return address is instead outside the trusted region, this may indicate that modifications were made by potentially malicious code. For example, malware may be attempting to remove legitimate hooks from security solutions, or add hooks of their own to jump to different memory locations. The process 100 may therefore distinguish between potentially malicious and legitimate modifications of in-memory code.
[0024] With reference to FIG. 2, there is shown an example process 200 for determining whether a function call is potentially malicious. In this example, the process begins by monitoring a region 202 of memory 201. For example, this monitored region may correspond to an address space of a set of protected functions in the memory. While the monitored region is represented as a continuous region of memory in FIG. 2, the monitored region may be a contiguous region or multiple separate regions of the memory. At 203, memory modification activity may be detected. For example, a piece of code may have made a function call to a target address and performed a number of actions. In some examples, this may comprise calling an API function at a target address or calling a function of a dynamic link library (DLL), and then jumping to a new address or modifying the code representing the API function call. The process 200 may determine the target address of this function call/modification. For example, the target address may be determined to be within the monitored region 202, where critical functions are saved. This may be a first indication that the function call is potentially malicious, justifying a check of whether potentially harmful modifications may have been made at the target address. The process may determine at 204 whether potentially malicious actions have been requested or performed at the target address. For example, the process may determine whether potentially malicious modifications to a protected function, such as an API function, have been requested or have been made. An example of a potentially malicious modification is modification of memory page protection at the target address.
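Since the monitored region may be one contiguous range or several separate ranges, an implementation might, for example, normalise it into a set of disjoint intervals. The following interval-merging sketch is illustrative only; the ranges shown are hypothetical.

```python
# Sketch: the monitored "region" as a union of disjoint ranges (cf. FIG. 2).
# Overlapping or touching ranges are merged so membership tests stay simple.
def merge_regions(regions):
    merged = []
    for start, end in sorted(regions):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_regions([(0x1000, 0x2000), (0x1800, 0x3000), (0x9000, 0xA000)]))
# [(4096, 12288), (36864, 40960)], i.e. 0x1000-0x3000 and 0x9000-0xA000
```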
[0025] If no potentially malicious actions are detected, the process may allow operation of the target function. If potentially malicious actions are detected at 204, however, this may indicate that malicious code has modified the in-memory code of a protected function. For example, malware may have removed or replaced hooks placed by security solutions. The process may then determine the return address of the function call at 206.
[0026] In some examples, identification of a potentially malicious function may be based on whether the return address is within a trusted region of the memory. The trusted region 207 may be an address space including the monitored region 202, legitimate common language runtime (CLR) code, valid just-in-time (JIT) code, and whitelisted regions. For example, the whitelisted regions may have been previously identified as safe to operate. If the return address is within the trusted region 207, as shown by the pointer labelled "Return Address 1", the modifications may have been made from trusted regions of the memory. For example, these actions may be related to actions performed by security solutions, such as hooking an API. The process may therefore identify the modifications as being from a legitimate source, and allow and/or whitelist the region of memory that made the function call at 208. In some examples, the whitelisted region may be used to update the trusted region 207. If the return address is outside the trusted region, such as that shown by the pointer labelled "Return Address 2", the process may identify the function call as potentially originating from malicious code. This may occur where malware removes hooks placed by security solutions and/or adds malicious hooks. The process may proceed to issue an alert and/or block the function/modification at 209.
[0027] With reference to FIG. 3, there is shown an example process 300 for determining whether a function call is potentially malicious. For example, the process 300 may monitor a region of memory. The monitored region may represent an address space for a set of protected functions, which may be segmented within the memory. In some examples, the monitored region may comprise an address space for a list of loaded dynamic link libraries (DLLs) and addresses for a list of critical API functions. In this example, the process 300 may obtain a list of the loaded DLLs at 301 and store respective address spaces and module names at 302. The process 300 may also obtain addresses and memory regions for critical APIs or API functions at 303 and 304, respectively. The critical API functions may comprise those API functions that initiate system calls to the operating system kernel, such as a set of native API calls, which are hooked to monitor for malicious activity. Detection of modifications to the hooks on these native API calls and detection of modifications within the address space of the loaded DLLs combine to provide enhanced security. Any modifications to functions within the monitored region, as a result of a function call, may be detected.
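One possible way to realise steps 301 to 304 is sketched below. Both enumeration helpers are assumptions standing in for platform-specific facilities (module enumeration and API export resolution), and the API names are examples only, not features defined by this disclosure.

```python
# Sketch of steps 301-304: building the monitored address set. Both helper
# callables are hypothetical stand-ins; the API names are examples only.
CRITICAL_APIS = ["NtProtectVirtualMemory", "NtWriteVirtualMemory"]

def build_monitored_regions(enumerate_loaded_dlls, resolve_api_address):
    dll_table = {}                                   # 301/302: name -> (base, end)
    for name, base, end in enumerate_loaded_dlls():
        dll_table[name] = (base, end)
    api_table = {api: resolve_api_address(api)       # 303/304: name -> address
                 for api in CRITICAL_APIS}
    return dll_table, api_table

# Toy stand-ins so the sketch runs end to end:
dlls, apis = build_monitored_regions(
    lambda: [("ntdll.dll", 0x7FF800000000, 0x7FF800200000)],
    lambda api: 0x7FF800001000,
)
print(dlls, apis)
```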
[0028] At 305 the process may determine a target address of a function call. For example, requests for modification of the in-memory code may be detected, and the process may determine the address at which the modifications have been requested or made. Depending on this target address, the process may proceed differently at 306. If the target address is outside the monitored region of the memory, this may indicate that protected functions are not being targeted. The process 300 may then allow the function/modification at 307. For example, the process 300 may allow the targeted function to make calls to the system kernel.
[0029] In some examples, the target address may be within the monitored region. For example, a function within the monitored region may be called and modified. Since modifications are made to a protected function within the monitored region, these may be potentially malicious. The process 300 may determine at 308 whether potentially malicious modifications have been made at the target address. The types of malicious modifications may also be updated over time based on developing detection models. An example of this may include checking for patterns of behaviour similar to those of previously identified malware. Potentially malicious modifications may comprise, but are not limited to, modifying memory page protection and/or an original address of the function at the target address.
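As an illustration of the check at 308, the sketch below flags a request that would add write or execute permission to a protected page. The protection constants mirror the Windows PAGE_* values but should be treated here as assumptions for illustration.

```python
# Sketch of the check at 308: flag requests that add write or execute
# permission to a protected page. Constants follow Windows PAGE_* values
# for illustration; treat them as assumptions.
PAGE_EXECUTE, PAGE_EXECUTE_READ = 0x10, 0x20
PAGE_EXECUTE_READWRITE, PAGE_READWRITE = 0x40, 0x04

WRITABLE = {PAGE_READWRITE, PAGE_EXECUTE_READWRITE}
EXECUTABLE = {PAGE_EXECUTE, PAGE_EXECUTE_READ, PAGE_EXECUTE_READWRITE}

def adds_write_or_execute(old_protection, new_protection):
    gained_write = new_protection in WRITABLE and old_protection not in WRITABLE
    gained_exec = new_protection in EXECUTABLE and old_protection not in EXECUTABLE
    return gained_write or gained_exec

print(adds_write_or_execute(PAGE_EXECUTE_READ, PAGE_EXECUTE_READWRITE))  # True
```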
[0030] If potentially malicious modifications have not been requested or made, the process may proceed from 309 to allow operation of the function at 310. For example, because potentially malicious modifications have not been made at the target address, the modified function may be safe to operate. The modified function may then be free to make system calls without being blocked. If potentially malicious modifications have been made at the target address, this may indicate that malicious modifications have been made to a protected function. This may be the case where malware has removed hooks placed by security solutions. In this example, the process may proceed from 309 to determine a return address of the function call at 311. For example, the process may determine an address from which the function call was made that resulted in the modifications. Identification of whether the function call is potentially malicious may then be further based on the determined return address.
[0031] The process 300 may determine whether the return address lies within a trusted region of the memory. This trusted region may represent a region of memory from which legitimate function calls and modifications may be made. In some examples, the trusted region may be an address space including the monitored region, legitimate CLR code, valid JIT code, whitelisted regions and/or boundaries of code sections. If the return address is within this trusted region, the modifications may have been made by trusted code. In an example, the modifications may have been made by legitimate security solutions. This may be the case where security solutions perform API hooking, this action having been determined as potentially malicious at 308. The process may therefore allow and/or whitelist the function at 313. The region of memory representing this whitelisted function may then be used to update the trusted region for future processes. If the return address lies outside the trusted region, the process may instead identify that the function call and modifications have been made from malicious code, such as malware shellcode. For example, malware may have modified the function at the target address to remove hooks placed by security solutions and/or add hooks of its own.
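The whitelisting at 313 might, for example, fold the caller's region into the trusted set used by future checks, as in this illustrative sketch; the ranges shown are hypothetical.

```python
# Sketch of step 313: a caller deemed legitimate is whitelisted, and the
# whitelisted range is merged into the trusted region for future checks.
def whitelist_caller(trusted_regions, caller_region):
    merged = []
    for start, end in sorted(trusted_regions + [caller_region]):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(whitelist_caller([(0x1000, 0x2000)], (0x1800, 0x2800)))
# [(4096, 10240)], i.e. the two ranges merge into 0x1000-0x2800
```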
[0032] The process may determine at 314 a memory region of the shellcode. For example, this memory region may be the contents of the malware code. This section of code may be that responsible for making the call to a function at the target address, and modifying said function. At 315 the process may dump the contents of this shellcode, and the determination may be used to build malware detection and defence rules and databases for enhancing future detection processes. At 316 the process may issue an alert and/or block operation of the modified function. The shellcode contents may also be included as part of the alert. The process 300 therefore monitors for modification of protected functions within a memory, differentiating between legitimate and malicious modifications. Consequently, security solutions may perform API hooking without the application of hooks being identified as false positives. It should be understood that the process 300 is one example of a process that may be used to identify if a caller function is potentially malicious. In other examples, the procedure shown in process 300 may be modified to include and/or remove steps, and to perform steps in different orders.
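Steps 314 to 316 might be realised along the following lines; `read_process_memory` and the dump path are hypothetical stand-ins, and the alert is reduced to a log record for illustration.

```python
import logging

# Sketch of steps 314-316: dump the suspected shellcode region and raise an
# alert. `read_process_memory` is a hypothetical stand-in for an OS-specific
# memory-read facility.
def dump_and_alert(read_process_memory, region, dump_path="shellcode.bin"):
    start, end = region
    contents = read_process_memory(start, end - start)   # step 315: dump contents
    with open(dump_path, "wb") as f:
        f.write(contents)       # retained to build detection rules and databases
    logging.warning("Blocked modification from %#x-%#x; dump at %s",
                    start, end, dump_path)               # step 316: alert/block
    return dump_path

# Toy stand-in returning zero bytes, so the sketch runs end to end:
dump_and_alert(lambda addr, size: bytes(size), (0x5000, 0x5040))
```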
[0033] In some examples, the identification of whether a function call or called function is potentially malicious may be performed in a virtualized environment. For example, when a user of a computing device wishes to run an application on a computing device, the application may be run inside a virtual machine, which is a process running on the computing device that emulates some or all functions of a separate computing device and operates like a separate, independent computing device within the host computer. In one example virtualisation environment using hardware-assisted virtualization, lightweight virtual machines (or "micro virtual machines") are created on demand for a plurality of operations running within the same computer system. A micro virtual machine is a process that is isolated from other micro virtual machines and requires only a small amount of the total system resources (e.g. CPU time and allocated RAM), and a plurality of micro virtual machines can be run simultaneously on the same computing device or system without interaction between their applications. There could be dozens or even hundreds of micro VMs running on the same computing device at the same time, with each one being created for a different process and being destroyed when that process terminates.
[0034] Although each micro virtual machine may require access to different resources of the computing device, each micro virtual machine may be created from the same template or a set of templates running on the same device and making use of the same underlying system hardware, BIOS, and operating system. As is well known, the operating system is software that manages the computing device's hardware and software resources and provides common services for use by application programs that device users interact with. For virtualisation, a hypervisor (supervisor code) may be running above the operating system to manage the creation of and allocation of resources to virtual machines. As used herein, a basic input/output system (BIOS) refers to hardware, or hardware and instructions, to initialize, control, or operate a computing device prior to execution of an operating system (OS) or virtual machine (VM) of the computing device. Instructions included within a BIOS may be software, firmware, microcode, or other programming that defines or controls functionality or operation of a BIOS. In one example, a BIOS may be implemented using instructions, such as platform firmware of a computing device, executable by a processor. A BIOS may operate or execute prior to the execution of the OS of a computing device. A BIOS may initialize, control, or operate components such as hardware components of a computing device and may load or boot the OS of the computing device. In some examples, a BIOS may provide or establish an interface between hardware devices or platform firmware of the computing device and an OS of the computing device, via which the OS of the computing device may control or operate hardware devices or platform firmware of the computing device. In some examples, a BIOS may implement the Unified Extensible Firmware Interface (UEFI) specification or another specification or standard for initializing, controlling, or operating a computing device.
[0035] A micro virtual machine can be used to isolate a potentially untrusted process from the computer's host operating system and from applications running within other virtual machines. Each micro virtual machine may be used to run a limited number of applications at one time, a single application, or even a single task within an application, with the execution of applications and tasks in one micro virtual machine being isolated from other virtual machines running on the same device or system. Many micro virtual machines may be run at one time in order to compartmentalize the execution of applications and/or other processes running in the computing device. This can provide enhanced security by reducing the potential for contamination between executing processes on separate micro VMs, and by containing untrusted operations. In the micro-virtualisation approach, the micro virtual machines are lightweight virtual machines that can be created, maintained and terminated on demand, and may exist for a limited time while the application within the micro virtual machine is running, before being terminated when their intended purpose is complete. In an example environment allowing simultaneous execution of applications within a plurality of isolated virtual machines, any code execution which may be potentially malicious can be contained within its own micro virtual machine, which is then destroyed after its intended use is completed or upon identification of memory tampering, thereby disallowing malicious code from effecting any lasting change to a computing device or the network. In an example, the micro virtual machine may run a local application or an individual web page session. When a user-initiated operation is completed, such as running a local application or when navigating away from a web page to another page with a different Internet URL domain, the corresponding micro virtual machine can be destroyed. Any new local application or web application can then be run inside a new separate micro virtual machine that may be cloned from a micro virtual machine master template. Thus, if there has been any potential compromise to security within any individual micro virtual machine as a consequence of execution of some malicious code, the adverse effects of the security breach are isolated from any system resources outside the affected micro virtual machine, and the effects of the security breach can be removed when the micro virtual machine is destroyed.
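The create-on-demand, destroy-on-completion lifecycle described above can be pictured with a simple sketch; `micro_vm` is a hypothetical wrapper, not an interface defined by this disclosure, and real creation and teardown would go through the hypervisor's management interface.

```python
from contextlib import contextmanager

# Hypothetical micro-VM lifecycle: cloned from a master template, used for a
# single task, and always destroyed afterwards.
@contextmanager
def micro_vm(template="master-template"):
    vm = {"template": template, "state": "running"}   # clone from master template
    try:
        yield vm                                      # run one task in isolation
    finally:
        vm["state"] = "destroyed"                     # torn down on completion

with micro_vm() as vm:
    pass  # e.g. render one web page or open one email attachment
```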
[0036] The combination of the detection of memory tampering as described herein, and the isolation of operations within a plurality of virtual machines in the described virtual machine environment, may allow enhanced security on a computing device. For example, malicious activity may be carried out in an application that is running on a virtual machine that is only given access to limited system resources, and is isolated from applications running in other virtual machines on the same device. The code running inside any virtual machine may be highly controlled and kept separate from processes of the underlying operating system. This may enhance malware detection and resolution because modifications to code accessible by the virtual machine can be contained and removed without compromising the execution of other processes running within another virtual machine. In a device that supports the creation of multiple lightweight micro virtual machines, the isolation of processes makes it possible to quickly and efficiently terminate a potentially malicious operation, or to quarantine relevant processes, without affecting other processes running in other virtual machines. The isolation between processes, combined with the efficiency of process termination, enables a rapid reaction to a potentially malicious operation, and reduces the motivation to collect additional behavioural indicators before taking action. In addition, the isolation between virtual machines also reduces the potential consequences of malicious operations that are performed within any single virtual machine. Therefore, the described solution for detection of memory tampering is well suited to a virtualised environment in which lightweight micro virtual machines are created for new operations in advance of determining whether or not they are trusted or potentially malicious operations. The isolation between virtual machines may also facilitate the detection of malicious activity, because of the limited number of processes and resources to be considered within any one virtual machine environment. For example, if no applications within the virtual machine are expected to modify the in-memory code accessible by the virtual machine, any detected modifications may indicate potentially malicious activity.
[0037] With reference to FIG. 4, there is shown an example process 400 for identifying within a virtual machine whether a function call is potentially malicious, and for resolving identified risks. For example, modifications may be made to memory accessible to the virtual machine as a result of a function call within the virtual machine. In this example, a computing device may receive a request to run an application at 401. In response to this request, a virtual machine may be generated in the computing device at 402. For example, a processor may be configured to generate a virtual machine in response to initiation of certain types of operation, such as in response to each system call that requires a function within the operating system kernel of the computing device. At 403 a processor may identify whether the function call within the virtual machine is potentially malicious. For example, a function call may be identified as potentially malicious in accordance with any of the examples described above. In some examples the virtual machine may be a micro virtual machine, and many of these micro virtual machines may be running simultaneously. At 404, the process reports the identification of a potentially malicious operation and terminates the operation and/or quarantines the process that initiated the operation. The reporting allows collection of data for operations identified as malicious or potentially malicious, and processing for visualisation and analysis.
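A compressed, illustrative sketch of process 400 follows; every callable it receives is a hypothetical stand-in for the hypervisor and detection machinery described above.

```python
# Sketch of process 400 (steps 401-404); all callables are hypothetical.
def process_400(app_request, create_vm, run_in_vm, detect_malicious, report):
    vm = create_vm(app_request)                 # 402: one VM per operation
    for call in run_in_vm(vm, app_request):     # observe function calls in the VM
        if detect_malicious(call):              # 403: checks described above
            report(call)                        # 404: report, then contain
            vm["terminated"] = True
            return "terminated"
    return "completed"

# Toy stand-ins so the sketch runs end to end:
result = process_400("open-attachment",
                     create_vm=lambda req: {"terminated": False},
                     run_in_vm=lambda vm, req: [{"suspicious": True}],
                     detect_malicious=lambda call: call["suspicious"],
                     report=lambda call: None)
print(result)  # terminated
```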
[0038] With reference to FIG. 5, there is shown an example computing device 500 comprising a hardware and BIOS layer that in this example includes the processor 501. Processor 501 and a host operating system 502 control major operations of the computing device 500. The computing device 500 may be a computing device running a hypervisor 503 that is capable of creating and managing virtual machines 504, 505. User-initiated operations within application programs 506, 507, 508 run within a respective one of the virtual machines 504, 505, which each run an operating system emulation 502', 502''. There may be other programs 509 running directly on the host operating system 502. Non-limiting, illustrative examples of computing device 500 include a PC, a laptop computer, a tablet computer, a cell phone, a personal digital assistant (PDA), and the like.
[0039] The hypervisor 503 may be used to create a first virtual machine 504, using services provided by the host operating system 502. For example, the virtual machine may be generated in response to a request for an application to run, and the hypervisor allocates required system resources. The applications 506, 507 and an emulated operating system instance 502' within the virtual machine 504 run in isolation from the host operating system 502 of computing device 500. In some examples, the virtual machine may be a micro virtual machine that is created by a Microvisor. The Microvisor is a hypervisor that is adapted to use hardware virtualisation support of the computing device on which the Microvisor runs and to create virtual machines that are tailored to support a particular task, with only the required system resources being allocated to each micro virtual machine. A micro virtual machine can be created for each new application-level operation that has potential vulnerabilities, such as a user selection of a browser tab or email attachment. In some examples, a virtual machine is created and destroyed as soon as the relevant task is complete; in other examples, the virtual machine may remain in a suspended state following its creation, until an application is to be opened, and then the virtual machine 504, 505 is reactivated. An application running in the virtual machine may make a function call to request services of the emulated operating system 502' within the same virtual machine. For example, the function call may have a target address within a region of memory that is accessible to the virtual machine, and the function call may lead to a modification of in-memory code at this target address. As described above, the processor may be running a malware detection process within the hypervisor (which may be a Microvisor) or as a detection process running within each virtual machine, to identify whether each function call is potentially malicious. For example, the malware detection process may identify tampering of memory accessible by the micro virtual machine. Identifying whether a function call is potentially malicious may be performed in accordance with the process described in FIG. 4.
[0040] An example computing device creates a micro virtual machine for execution of an operation, which may be an operation type having a security vulnerability. A process running on the device detects requests for in-memory code modifications as part of the operation running within the virtual machine, and identifies attempts at memory tampering by: monitoring a region of memory accessed via the virtual machine, the monitored region corresponding to an address space for a set of protected functions; determining, when a request is made for modification of the memory, whether a target address of the request is within the monitored region; and identifying the requested modification as potentially malicious if the target address of the request is within the monitored region, memory protection at the target address has been modified, and a return address of the request is outside a trusted region. If the operation is identified as malicious, the virtual machine can be terminated. The combination of highly granular task-specific malware detection, based on target addresses and return addresses before operations are completed, with highly granular micro virtual machines is advantageous to mitigate security risks.
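The three-part test in this example may be summarised, purely for illustration, as the following predicate; the region lists and addresses are hypothetical.

```python
# Sketch of the three-part test summarised above; all three conditions must
# hold for a modification to be flagged. The region lists are hypothetical.
def in_any(address, regions):
    return any(start <= address < end for start, end in regions)

def is_potentially_malicious(target, return_address, protection_modified,
                             monitored, trusted):
    return (in_any(target, monitored)                 # targets a protected function
            and protection_modified                   # page protection was changed
            and not in_any(return_address, trusted))  # caller outside trusted code

print(is_potentially_malicious(0x2000, 0x9000, True,
                               monitored=[(0x1000, 0x3000)],
                               trusted=[(0x1000, 0x3000)]))  # True
```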
[0041] A micro virtual machine may be disposable, in that it may be created, maintained, and destroyed on demand. Such virtual machines may exist only for the limited time that an application is running within them. For example, the virtual machine may be destroyed once an application is terminated. A plurality of virtual machines may be running concurrently on the computing device 500. For example, a plurality of micro virtual machines may run concurrently. Different applications may be running in different virtual machines on the computing device. In some examples, respective operating system images of the computing device 500 at the time of creation of the virtual machines may also be created for running in the respective virtual machines. Therefore, a virtual machine may possess its own instance or copy of the operating system, which is isolated and separate from the main operating system executing within the computing device 500.
[0042] With reference to FIG. 6, there is shown an example device 600 comprising a processor 601, a random access memory 602, a read only memory 603 and a storage device 604. A communication interface 605 provides communication via a network link 606. The device's main processing hardware is connected to a display device 607 and an input device 608. In some examples, the computing device 600 may form part of a larger multiprocessor computer system. In some examples, the processor may be configured by software, such as detection code within the above-described hypervisor, to determine whether a function call is potentially malicious. For example, the processor may be configured to detect whether potentially malicious modifications have been made to the memory 602. Identification of potentially malicious function calls may be performed in accordance with any of the examples described above.
[0043] FIG. 7 shows an example of a computer readable medium 702, which is a non-transitory storage medium storing instructions 710, 711, 712, 713 that, when executed by a processor 700 coupled to a memory 701, cause the processor 700 to identify whether a function call is potentially malicious in accordance with the examples described above. The term "non-transitory storage medium" does not encompass transitory propagating signals. In some examples, the executable instructions may cause the processor to generate a virtual machine. The computer readable medium may cause the processor to identify whether a requested operation within the micro virtual machine is potentially malicious. For example, the processor may identify a potentially malicious function call in accordance with the process described with respect to FIG. 4. The computer readable medium 702 may be any form of storage device capable of storing executable instructions, such as a non-transient computer readable medium, for example Random Access Memory (RAM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), a storage drive, an optical disc, or the like.
Claims
1. A computing device comprising:
a memory; and
a processor to:
monitor a region of the memory, wherein the monitored region corresponds to an address space for a set of protected functions;
determine, in response to a request for a code modification, whether a target address of the requested modification is within the monitored region; and
identify the request for code modification as potentially malicious when:
the target address of the request for modification is within the monitored region; and
a return address of the request for modification is outside a trusted region of the memory.
2. A computing device according to claim 1, wherein a request for a code modification is identified as potentially malicious when the requested code modification includes a modification of memory protection at the target address.
3. A computing device according to claim 1, wherein the processor is to:
check whether a write or execute permission is added or reset;
in response to detection of a write or execute permission being added or reset, check whether the return address of the request for modification lies in a trusted region of memory; and
allow the modification when the return address lies in the trusted region; or
disallow the modification when the return address lies outside the trusted region.
4. A computing device according to claim 3, wherein disallowing the modification comprises discarding the contents of a respective monitored region of memory and generating an alert.
5. A computing device according to claim 1, wherein the trusted region of memory comprises a region of memory into which a trusted function has been loaded or which is dynamically allocated to the trusted function.
6. A computing device according to claim 1, wherein the set of protected functions comprises application programming interface (API) functions and/or dynamic link library (DLL) functions, and the monitoring comprises monitoring a list of APIs and monitoring for modifications to a DLL address space.
7. A computing device according to claim 6, wherein the computing device is to store a DLL base address, a DLL start address, a DLL end address, and a DLL module name.
8. A non-transitory computer-readable medium comprising instructions that, when executed, cause a processor of a computing device to:
(i) monitor a region of memory of the computing device corresponding to a memory address space of a set of protected functions;
(ii) identify a request for a code modification having a target address within the monitored region; and
(iii) identify the code modification as potentially malicious when the target address of the identified modification is within the monitored region and the code modification is initiated from outside a trusted region of the memory.
9. A computer-readable medium according to claim 8, wherein the instructions, when executed, further cause the processor to:
check whether a write or execute permission is added or reset;
check whether the return address lies in the trusted region of memory, in response to detection of a write or execute permission being added or reset; and either:
allow the modification when the return address lies in the trusted region; or
disallow the modification when the return address lies outside the trusted region.
10. A computer-readable medium according to claim 9, wherein disallowing the modification comprises discarding the contents of a respective monitored region of memory and generating an alert.
11. A computer-readable medium according to claim 8, wherein the set of protected functions comprises API functions and DLL functions, and the monitoring activity comprises monitoring a list of APIs and monitoring for modifications to a DLL address space.
12. A computing device comprising:
a processor to:
generate a virtual machine on the computing device;
monitor a region of memory accessed via the virtual machine, wherein the monitored region corresponds to an address space for a set of protected functions;
determine, when a request is made for modification of the memory, whether a target address of the request is within the monitored region; and
identify the requested modification as potentially malicious when:
the target address of the request is within the monitored region;
memory protection at the target address has been modified; and
a return address of the request is outside a trusted region.
13. A computing device according to claim 12, wherein the processor is to terminate the virtual machine in response to identifying the function call as potentially malicious.
14. A computing device according to claim 12, wherein the virtual machine is generated in response to the processor receiving a request to launch an application, which application is run in the virtual machine.
15. A computing device according to claim 12, wherein, via the virtual machine, the processor is to:
check whether a write or execute permission is added or reset;
check whether the return address lies in the trusted region of memory, in response to detection of a write or execute permission being added or reset; and:
allow the modification when the return address lies in the trusted region; or
prevent the modification when the return address lies outside the trusted region.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2020/044922 WO2022031275A1 (en) | 2020-08-05 | 2020-08-05 | Detection of memory modification |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2022031275A1 (en) | 2022-02-10 |
Family ID: 80118436
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2020/044922 WO2022031275A1 (en) | Detection of memory modification | 2020-08-05 | 2020-08-05 |
Country Status (1)

| Country | Link |
|---|---|
| WO (1) | WO2022031275A1 (en) |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2691908A2 (en) * | 2011-03-28 | 2014-02-05 | McAfee, Inc. | System and method for virtual machine monitor based anti-malware security |
| US20150248557A1 (en) * | 2011-03-31 | 2015-09-03 | McAfee, Inc. | System and method for below-operating system trapping and securing loading of code into memory |
| US20180253369A1 (en) * | 2016-10-11 | 2018-09-06 | Green Hills Software, Inc. | Systems, methods, and devices for vertically integrated instrumentation and trace reconstruction |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20948726; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20948726; Country of ref document: EP; Kind code of ref document: A1 |