Docket No.: 020455/WO

TITLE OF THE INVENTION

SYSTEMS AND METHODS FOR MOTION TRACKING

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority from U.S. Provisional Application Serial No. 63/481,850 filed on January 27, 2023, which is incorporated herein by reference in its entirety.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

This invention was made with government support under EB025209 and EB032684 awarded by the National Institutes of Health. The government has certain rights in the invention.

MATERIAL INCORPORATED-BY-REFERENCE

Not applicable.

FIELD OF THE INVENTION

The present disclosure generally relates to methods and apparatuses for motion tracking.

BACKGROUND OF THE INVENTION

Motion artifacts, from such sources as heartbeats, respiration, or peristalsis, often degrade images or videos of live subjects. During confocal or two-photon microscopy as typically conducted in neuroscience studies, sample motion artifacts hinder researchers in visualizing micrometer-level structures and recording long-period time-lapse data. Moreover, since any biomedical imaging system has a confined field of view (FOV), severe sample motion can shift the target region out of the FOV during image acquisition. Motion correction algorithms have been proposed to reduce and compensate for the artifacts based on the acquired data. However, these solutions may require multiple volumetric scans, and the post-processing approach precludes real-time correction if the sample is moving out of
the FOV during long-period acquisition. Specialized holders have been developed to limit the subject's activity and keep the target within the FOV, but such artificial shackles can also change the behavior of living subjects. One alternative is motion tracking, which can provide feedback to the imaging system's control, adaptively following the subject within the FOV and decreasing or even eliminating motion artifacts. Several existing motion tracking algorithms are currently used for various applications, such as autonomous driving, but only a few can be used at the microscopic level. An ideal motion tracking system for microscopy applications should provide the high spatial and temporal resolution needed to achieve high measurement accuracy without constraining the sample or the imaging system. Among many imaging modalities, optical coherence tomography (OCT), a non-invasive, label-free, real-time, three-dimensional (3D) imaging technique, is a promising candidate. Previous methods for tracking motion in OCT relied on speckle decorrelation analysis. One existing method made use of an essential relationship between the displacement of two axial scans (A-scans) and their cross-correlation coefficient (XCC). Since the speckle size is on the same order of magnitude as the OCT resolution, speckle decorrelation analysis is more accurate than conventional frame-to-frame position tracking. Speckle decorrelation analysis uses the principles of OCT signal formation and detection to connect two A-scans' displacement and their XCC. The signal formation for each A-scan may be derived from the sample scattering distribution and the system's point spread function. The XCC of two A-scans is calculated using their signal intensity profiles. Consequently, the value of XCC can be used to estimate the position displacement between two A-scans. Existing motion tracking methods based on speckle decorrelation analysis have been applied to track the nonconstant beam scanning speed of an OCT system and to correct motion artifacts caused by one-dimensional (1D) non-uniform probe scanning in hand-held OCT. To track two-dimensional (2D) transverse motion, one existing method scanned the sample circularly, capturing both the magnitude and the direction of the motion. However, a previous study related to the
existing motion tracking method reported the speed of the sample motion only within a range from 0.2 mm/s to 1.4 mm/s, a change of less than one order of magnitude. However, typical sample motion often varies from the micro to the macro level, and tracking such motion requires a motion tracking method with a high dynamic range.

SUMMARY OF THE INVENTION

Among the various aspects of the present disclosure is the provision of a system and method for motion tracking. Briefly, therefore, the present disclosure is directed to methods and systems for motion tracking with image analysis. In one aspect, a system to track the 3D motion of an object within the field of view of an imaging device is disclosed that includes a computing device operatively coupled to the imaging device. The computing device includes at least one processor configured to operate the imaging device to obtain a plurality of image datasets, receive the plurality of image datasets, and calculate the motion of the object within the field of view of the imaging device based on a comparison of at least a pair of successive image datasets. Each image dataset includes a series of images at discrete user-defined positions along a user-defined circular scan trajectory. In some aspects, calculating the motion of the object includes at least one of: performing an interframe analysis to track an object's motion magnitude and direction based on intersected regions detected between the pair of successive image datasets, wherein the object motion magnitude ranges from about 1 micrometer/second to about 1 millimeter/second; and performing an intraframe analysis to track the object motion magnitude and direction based on the displacements between corresponding pairs of images obtained from the pair of successive image datasets, wherein the object motion magnitude ranges from about 1 millimeter/second to about 1 centimeter/second. In some aspects, the imaging device includes an optical coherence tomography imaging device, an MRI device, or a computed tomography device. In some aspects, the imaging device includes a laser, a spectrometer, a fiber coupler, a reference beam, and a sample beam that irradiates a sample. In some aspects,
the system is further configured to provide active motion compensation via feedback control. In some aspects, the imaging data includes microscopic imaging data. In some aspects, the imaging data includes macroscopic imaging data. The present teachings include systems to track the motion of a sample in 3D. In some aspects, the system can include an imaging device to acquire imaging data, a computing device, and an algorithm to calculate motion from the imaging data. In some embodiments, the imaging device can include, but is not limited to, an optical coherence tomography imaging device, an MRI device, or a computed tomography device. In some embodiments, the imaging device can include a laser, spectrometer, fiber coupler, reference beam, and sample beam that irradiates a sample. In some aspects, the system can track transverse motion at speeds between several micrometers per second and several centimeters per second. In another aspect, the system can track axial motion at speeds between several micrometers per second and several millimeters per second. In another aspect, the imaging system can perform circular scans over the sample. In yet another aspect, the magnitude and direction of sample motion are estimated by interframe and intraframe analysis. In some embodiments, the algorithm can estimate the magnitude and direction of sample motion. In another aspect, the interframe analysis focuses on the intersection region between adjacent circular scans. In yet another aspect, the intraframe analysis focuses on the displacement between adjacent A-scans within one circular scan. In some embodiments, the system provides active motion compensation via feedback control. In some embodiments, the imaging data is microscopic imaging data. In other embodiments, the imaging data is macroscopic imaging data. The present teachings also include methods for estimating 3D motion from imaging data. In some aspects, the method can include acquiring imaging data from a sample with an imaging device, analyzing the imaging data with a computing device, and estimating motion with an algorithm. In accordance with another aspect, circular scans can be performed to acquire imaging data. In some embodiments, the imaging device can be an optical coherence tomography imaging device, an MRI
device, or a computed tomography device. In accordance with yet another aspect, interframe and intraframe analysis can be performed on the imaging data. In one aspect, the interframe analysis focuses on the intersection region between adjacent circular scans. In another aspect, intraframe analysis focuses on the displacement between adjacent A-scans within one circular scan. In yet another aspect, estimating motion with the algorithm provides active motion compensation via feedback control. Other objects and features will be in part apparent and in part pointed out hereinafter.

DESCRIPTION OF THE DRAWINGS

FIG.1 is a block diagram schematically illustrating a system in accordance with one aspect of the disclosure.

FIG.2 is a block diagram schematically illustrating a computing device in accordance with one aspect of the disclosure.

FIG.3 is a block diagram schematically illustrating a remote or user computing device in accordance with one aspect of the disclosure.

FIG.4 is a block diagram schematically illustrating a server system in accordance with one aspect of the disclosure.

FIG.5A contains a top view of a circular scan (darker gray) on a sample (lighter gray) when there is no sample motion in accordance with one aspect of the invention.

FIG.5B contains a top view of the circular scan beam trace left on the sample when the sample is moving at the speed v_m at angle α in accordance with one aspect of the invention. The light gray represents the position before the sample moved, and the dark gray represents the sample's position after moving.

FIG.5C contains beam traces from two successive circular scans when the sample moves along a positive x-axis in accordance with one aspect of the invention. The lighter gray/rightmost cycloid represents the first circular scan, and
the darker gray/leftmost cycloid represents the second circular scan. The darker gray/leftmost and lighter gray/rightmost beam spots in the black dashed rectangle are the intersected region between two successive circular scans.

FIG.5D contains a single circular scan beam trace for intraframe analysis in accordance with one aspect of the invention. The darker gray and lighter gray beam spots in the black dashed rectangle are adjacent A-scans.

FIG.6A contains a flowchart of axial motion tracking in accordance with one aspect of the invention.

FIG.6B contains first (dashed line) and second or following (solid line) cross-sectional images in accordance with one aspect of the invention.

FIG.6C contains a first cross-section image with a rectangle denoting a selected ROI.

FIG.6D contains a second cross-section image superimposed with the rectangle denoting the ROI selected from FIG.6C.

FIG.7 contains a single-beam OCT system schematic for motion tracking in accordance with one aspect of the invention.

FIG.8 contains a multi-beam motion tracking system with a parallel OCT in accordance with one aspect of the invention. By combining motion measured concurrently at each sample location, additional information characterizing the sample motion can be calculated. For example, if the sample is a rigid body, rotation of the sample about the x, y, and z-axes (pitch, yaw, and roll) can be measured from the 3D motion of each sample point. For a non-rigid sample, the deformation of the sample at different locations can be measured by combining the 3D motion measured at each sample point.

FIG.9A contains a multi-beam distribution pattern with line-like multi-beam circular scans in accordance with one aspect of the invention.

FIG.9B contains a multi-beam distribution pattern with cross-like multi-beam circular scans in accordance with one aspect of the invention.
FIG.9C contains a multi-beam distribution pattern with grid-like multi-beam circular scans in accordance with one aspect of the invention.

FIG.10A contains a graph summarizing an XCC sensitivity calibration using the relationship between NA and XCC in accordance with one aspect of the invention.

FIG.10B contains a graph summarizing an XCC sensitivity calibration with the overlap d² between different A-scans of different samples in accordance with one aspect of the invention.

FIG.10C contains a graph summarizing an XCC sensitivity calibration with the essence of the XCC noise floor of different samples in accordance with one aspect of the invention.

FIG.11A contains a cropped cross-section image of an onion in the first circular scan in accordance with one aspect of the invention.

FIG.11B contains a cropped cross-section image of the onion of FIG.11A in the second circular scan in accordance with one aspect of the invention.

FIG.11C contains a graph summarizing the max XCC values for each i-th A-scan among all j-th A-scans in accordance with one aspect of the invention.

FIG.11D contains a graph summarizing the detected intersection index offset ε∗ based on the data of FIG.11C.

FIG.12A contains the verification of the interframe analysis with changing NA in accordance with one aspect of the invention.

FIG.12B contains the verification of the interframe analysis with changing R in accordance with one aspect of the invention.

FIG.12C contains the verification of the interframe analysis with changing t_e in accordance with one aspect of the invention.

FIG.12D contains the verification of the interframe analysis using different circular scan patterns to extract the different detectable speed ranges of the sample motion in accordance with one aspect of the invention.
FIG.12E contains the verification of detecting the direction of the sample motion in accordance with one aspect of the invention.

FIG.13A summarizes the experimental results of intraframe analysis with a cross-section image of a chicken breast in one circular scan in accordance with one aspect of the invention.

FIG.13B is a zoomed-in image from the left rectangle of FIG.13A.

FIG.13C is a zoomed-in image from the right rectangle of FIG.13A.

FIG.13D summarizes the experimental results of intraframe analysis with a curve of the values of XCC corresponding to FIG.13A in accordance with one aspect of the invention.

FIG.13E summarizes the experimental results of intraframe analysis with the calculated d² value based on the values of XCC in FIG.13B, where the lighter gray dots are from the experimental result and the solid darker gray line is a fitted sine curve, in accordance with one aspect of the invention.

FIG.14A contains the verification of the intraframe analysis with changing NA in accordance with one aspect of the invention.

FIG.14B contains the verification of the intraframe analysis with changing R in accordance with one aspect of the invention.

FIG.14C contains the verification of the intraframe analysis with changing t_e in accordance with one aspect of the invention.

FIG.14D contains the verification of the intraframe analysis using different circular scan patterns to extract the different detectable speed ranges of the sample motion in accordance with one aspect of the invention.

FIG.14E contains the verification of the intraframe analysis detection of the direction of the sample motion in accordance with one aspect of the invention.

FIG.15A contains the tracked and designed motion pattern in accordance with one aspect of the invention.
FIG.15B contains the tracked speed of the sample motion in accordance with one aspect of the invention.

FIG.15C contains the tracked angle of the sample motion in accordance with one aspect of the invention.

FIG.16A is a cross-section OCT circular scan image of a chicken breast before the Z-stage movement.

FIG.16B is a cross-section OCT circular scan image of a chicken breast after the Z-stage movement.

FIG.16C contains the verification of axial motion tracking with the relationship between the XCC value of the cropped images and the reference ROI and the axial pixel index in accordance with one aspect of the invention.

FIG.16D contains the verification of axial motion tracking with the experimental and simulation results for different axial sample speeds in accordance with one aspect of the invention.

FIG.17A contains a sketch of the mouse showing where the circular scan is performed, where the white region indicates where the hair on the chest has been removed, in accordance with one aspect of the invention.

FIG.17B contains the axial motion tracking results showing the mouse's heart beating in accordance with one aspect of the invention.

Those of skill in the art will understand that the drawings, described herein, are for illustrative purposes only. The drawings are not intended to limit the scope of the present teachings in any way.

DETAILED DESCRIPTION OF THE INVENTION

The present disclosure is based, at least in part, on the discovery that 3D motion tracking in an extended speed range can be achieved with optical coherence tomography. As shown herein, methods and systems for motion tracking are
described. The present disclosure provides a motion-tracking device and method that utilizes high-resolution and high-speed optical coherence tomography (OCT) imaging to scan samples with different patterns. OCT is a label-free, 3D imaging method widely used for biomedical research and clinical applications, such as ophthalmology. The unique feature of the disclosed method is that it can track sample motion in 3D (unlike camera-based 2D tracking) using a single imaging beam, while also being capable of measuring additional rotational degrees of freedom (such as yaw, pitch, and roll) with high spatial and temporal resolution (e.g., micrometer-level displacements can be tracked at hundreds of frames per second). The technique can track sample motion directions, displacements, speed, acceleration, etc., and provides a high dynamic range of measurements. It can also be used to provide feedback information for motion correction and sample stabilization. One aspect of the present disclosure provides a 3D motion tracking method to track sample motion in an extended speed range from several micrometers per second to several centimeters per second. This ability is achieved by controlling the circular scan pattern settings. Moreover, two analysis models for motion tracking in the transverse plane are described and employed. For different estimable speeds of the sample motion, instructions are provided on adequately adjusting the circular scan pattern settings and selecting between the analysis models. In addition, a detailed analysis of extracting the direction information of the sample motion in the transverse plane is provided. Finally, axial motion tracking is combined with the two analysis models to achieve 3D motion tracking. Several experiments were performed to validate the new motion tracking method and to demonstrate tracking of different designed transverse motion patterns and of the movement of mouse skin in vivo in an anesthetized mouse. As described herein, a new 3D motion tracking method based on circular scans and speckle decorrelation analysis is employed. Two analysis modes, intraframe and interframe analysis, are developed to cover a broad detectable range of speeds. Motion tracking was tested with OCT. The experimental results
prove the correctness of the motion tracking method and show its large detectable speed range, from several micrometers per second to several centimeters per second. This detectable speed range is more than three orders of magnitude broader than that of any previous motion tracking method based on OCT and speckle decorrelation analysis. Furthermore, instructions on manipulating the circular scans to reach different speed ranges for tracking the sample motion are provided and discussed. The instructions will be very useful when combining this auxiliary motion tracking system with a primary imaging system, and they will help to implement motion tracking automatically through feedback control. The motion tracking need not be limited to the OCT system. The method could also be applied to other scanning or sensing systems, such as LIDAR. Extending it to other techniques can further broaden the detectable speed range; speeds of meters per second or even kilometers per second may be tracked by changing the circular scan pattern settings. The application will not only provide feedback control on biomedical imaging systems to compensate for motion artifacts or track the sample, but can also be used in autonomous driving or even spacecraft guidance. Overall, this disclosure provides a solution for motion tracking, and it can be applied not just to minimize motion artifacts in microscopic images, but also to stabilize samples for precision procedures, such as micro-injection or surgery. The experiments described in the examples below validated the disclosed 3D motion tracking method based on circular scans and speckle decorrelation analysis. Two analysis modes, the intraframe and interframe analysis, were validated to cover a broad detectable range of speeds. The motion tracking was tested using an OCT imaging system. The experimental results validated the disclosed motion tracking method and showed its large detectable speed range, from several micrometers per second to several centimeters per second. This detectable speed range is more than three orders of magnitude broader than that of any previous motion tracking method based on OCT and speckle decorrelation analysis. Furthermore, the parameters of the circular scans may be manipulated to reach
different speed ranges for tracking the sample motion. The disclosed motion tracking system may be provided as a separate auxiliary motion tracking system configured for use with the primary imaging system. In some aspects, such auxiliary motion tracking systems may implement motion tracking through feedback control automatically. The motion tracking need not be limited to the OCT system. The method could also be applied to other scanning or sensing systems, such as LIDAR. Extending it to other techniques could further broaden the detectable speed range; speeds of meters per second or even kilometers per second may be tracked by changing the circular scan pattern settings. The application will not only provide feedback control on biomedical imaging systems to compensate for motion artifacts or track the sample, but can also be used in autonomous driving or even spacecraft guidance systems. In the 3D motion tracking method, the magnitude and direction of the sample motion are the average values within the data acquisition time. This acquisition time is one circular scan period for the intraframe analysis in transverse motion tracking and two circular scan periods for the interframe analysis. For axial motion tracking, it is also two circular scan periods. The sample may rotate, and not all A-scans will have the same axial shift, as in the demonstration of motion tracking on mouse skin. Parallel imaging can be performed on different regions of the sample to obtain different motion-tracking results. How the sample rotates and deforms in 3D can be determined from the different magnitude and direction results of the sample motion obtained by parallel motion tracking.

Computing Device

In various aspects, the disclosed image motion tracking system includes a computing device operatively coupled to an imaging device. The computing device is configured to operate the imaging device to obtain imaging data of a moving object or structure along a series of user-specified circular scanning trajectories as disclosed herein, to perform motion analysis of the imaging data according to the motion tracking methods disclosed herein, to correct images of the object to remove motion artifacts, and to provide feedback control to the imaging device to maintain
the object within the field of view of the imaging device. FIG.1 depicts a simplified block diagram of a computing device 300 for implementing the methods described herein. As illustrated in FIG.1, the computing device 300 may be configured to implement at least a portion of the tasks associated with the method of tracking the motion of an object within the field of view of an imaging system based on analysis of imaging data obtained using the imaging device 310. The computer system 300 may include a computing device 302. In one aspect, the computing device 302 is part of a server system 304, which also includes a database server 306. The computing device 302 is in communication with database 308 through the database server 306. The computing device 302 is communicably coupled to imaging device 310 and a user computing device 330 through a network 350. Network 350 may be any network that allows local area or wide area communication between the devices. For example, network 350 may allow communicative coupling to the Internet through at least one of many interfaces including, but not limited to, at least one of a network, such as the Internet, a local area network (LAN), a wide area network (WAN), an integrated services digital network (ISDN), a dial-up connection, a digital subscriber line (DSL), a cellular phone connection, and a cable modem. The user computing device 330 may be any device capable of accessing the Internet including, but not limited to, a desktop computer, a laptop computer, a personal digital assistant (PDA), a cellular phone, a smartphone, a tablet, a phablet, wearable electronics, a smartwatch, or other web-based connectable equipment or mobile devices. In other aspects, the computing device 302 is configured to perform a plurality of tasks associated with the disclosed method of tracking the motion of an object within the field of view of the imaging device as disclosed herein. FIG.2 depicts a component configuration 400 of computing device 402, which includes database 410 along with other related computing components. In some aspects, computing device 402 is similar to computing device 302 (shown in FIG.1). A user 404 may access components of computing device 402. In some aspects, database 410 is similar to database 308 (shown in FIG.1).
In one aspect, database 410 includes motion analysis data 418, image motion data 420, and imaging data 412. Non-limiting examples of suitable image motion data 420 include any values of parameters defining the motion of the image within the field of view of the imaging device including, but not limited to, translations and rotations of the object and/or surrounding sample within the field of view in the lateral direction, defined herein as motion within the focal plane or equivalent of the imaging device, and motion in the axial direction, defined herein as motion perpendicular to the focal plane or equivalent of the imaging device. In one aspect, imaging data 412 includes any values defining the series of images obtained at discrete points along a circular scan trajectory as described herein. In one aspect, the motion analysis data 418 includes any values defining the equations or algorithms used to implement the motion tracking as disclosed herein, including interframe analysis, intraframe analysis, and axial motion analysis. Computing device 402 also includes a number of components that perform specific tasks. In the exemplary aspect, computing device 402 includes data storage device 430, imaging component 440, motion analysis component 450, and communication component 460. Data storage device 430 is configured to store data received or generated by computing device 402, such as any of the data stored in database 410 or any outputs of processes implemented by any component of computing device 402. The imaging component 440 is configured to operate or produce signals configured to operate the imaging device 310 (FIG.1) to obtain imaging data along a series of circular scan trajectories and optionally to adjust the position of a sample within the field of view of the imaging device to maintain the sample within the imaging device's field of view. The motion analysis component 450 is configured to estimate the motion of an object within the field of view of the imaging device based on imaging data 412 obtained using the imaging device as operated under the control of imaging component 440. Imaging component 440 is configured to operate the imaging device to obtain the imaging data 412 used by the motion analysis component 450 to determine the motion of the object within the field of view of the imaging device. In some aspects,
the imaging component 440 is further configured to operate the imaging device to compensate for the motion of the object to maintain the object within the field of view of the imaging device based on feedback provided from the image motion data 420 generated by the motion analysis component 450. Communication component 460 is configured to enable communications between computing device 402 and other devices (e.g., user computing device 330 and imaging device 310, shown in FIG.1) over a network, such as network 350 (shown in FIG.1), or a plurality of network connections using predefined network protocols such as TCP/IP (Transmission Control Protocol/Internet Protocol). FIG.3 depicts a configuration of a remote or user computing device 502, such as user computing device 330 (shown in FIG.1). Computing device 502 may include a processor 505 for executing instructions. In some aspects, executable instructions may be stored in a memory area 510. Processor 505 may include one or more processing units (e.g., in a multi-core configuration). Memory area 510 may be any device allowing information such as executable instructions and/or other data to be stored and retrieved. Memory area 510 may include one or more computer-readable media. Computing device 502 may also include at least one media output component 515 for presenting information to user 501. Media output component 515 may be any component capable of conveying information to user 501. In some aspects, media output component 515 may include an output adapter, such as a video adapter and/or an audio adapter. An output adapter may be operatively coupled to processor 505 and operatively coupleable to an output device such as a display device (e.g., a liquid crystal display (LCD), organic light emitting diode (OLED) display, cathode ray tube (CRT), or "electronic ink" display) or an audio output device (e.g., a speaker or headphones). In some aspects, media output component 515 may be configured to present an interactive user interface (e.g., a web browser or client application) to user 501. In some aspects, computing device 502 may include an input device 520 for receiving input from user 501. Input device 520 may include, for example, a
keyboard, a pointing device, a mouse, a stylus, a touch-sensitive panel (e.g., a touchpad or a touch screen), a camera, a gyroscope, an accelerometer, a position detector, and/or an audio input device. A single component such as a touch screen may function as both an output device of media output component 515 and input device 520. Computing device 502 may also include a communication interface 525, which may be communicatively coupleable to a remote device. Communication interface 525 may include, for example, a wired or wireless network adapter or a wireless data transceiver for use with a mobile phone network (e.g., Global System for Mobile communications (GSM), 3G, 4G, or Bluetooth) or other mobile data network (e.g., Worldwide Interoperability for Microwave Access (WIMAX)). Stored in memory area 510 are, for example, computer-readable instructions for providing a user interface to user 501 via media output component 515 and, optionally, receiving and processing input from input device 520. A user interface may include, among other possibilities, a web browser and client application. Web browsers enable users 501 to display and interact with media and other information typically embedded on a web page or a website from a web server. A client application allows users 501 to interact with a server application associated with, for example, a vendor or business. FIG.4 illustrates an example configuration of a server system 602. Server system 602 may include, but is not limited to, database server 306 and computing device 302 (both shown in FIG.1). In some aspects, server system 602 is similar to server system 304 (shown in FIG.1). Server system 602 may include a processor 605 for executing instructions. Instructions may be stored in a memory area 610, for example. Processor 605 may include one or more processing units (e.g., in a multi-core configuration). Processor 605 may be operatively coupled to a communication interface 615 such that server system 602 may be capable of communicating with a remote device such as user computing device 330 (shown in FIG.1) or another server system 602. For example, communication interface 615 may receive requests from
user computing device 330 via network 350 (shown in FIG.1). Processor 605 may also be operatively coupled to a storage device 625. Storage device 625 may be any computer-operated hardware suitable for storing and/or retrieving data. In some aspects, storage device 625 may be integrated into server system 602. For example, server system 602 may include one or more hard disk drives as storage device 625. In other aspects, storage device 625 may be external to server system 602 and may be accessed by a plurality of server systems 602. For example, storage device 625 may include multiple storage units such as hard disks or solid-state disks in a redundant array of inexpensive disks (RAID) configuration. Storage device 625 may include a storage area network (SAN) and/or a network attached storage (NAS) system. In some aspects, processor 605 may be operatively coupled to storage device 625 via a storage interface 620. Storage interface 620 may be any component capable of providing processor 605 with access to storage device 625. Storage interface 620 may include, for example, an Advanced Technology Attachment (ATA) adapter, a Serial ATA (SATA) adapter, a Small Computer System Interface (SCSI) adapter, a RAID controller, a SAN adapter, a network adapter, and/or any component providing processor 605 with access to storage device 625. Memory areas 510 (shown in FIG.3) and 610 may include, but are not limited to, random access memory (RAM) such as dynamic RAM (DRAM) or static RAM (SRAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and non-volatile RAM (NVRAM). The above memory types are examples only and are thus not limiting as to the types of memory usable for storage of a computer program. The computer systems and computer-implemented methods discussed herein may include additional, fewer, or alternate actions and/or functionalities, including those discussed elsewhere herein. The computer systems may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media. The methods may be implemented via one or more local
or remote processors, transceivers, servers, and/or sensors (such as processors, transceivers, servers, and/or sensors mounted on vehicles or mobile devices, or associated with smart infrastructure or remote servers), and/or via computer-executable instructions stored on non-transitory computer-readable media or medium. In some aspects, a computing device is configured to implement machine learning, such that the computing device "learns" to analyze, organize, and/or process data without being explicitly programmed. Machine learning may be implemented through machine learning (ML) methods and algorithms. In one aspect, a machine learning (ML) module is configured to implement ML methods and algorithms. In some aspects, ML methods and algorithms are applied to data inputs and generate machine learning (ML) outputs. Data inputs may include but are not limited to: images or frames of a video, object characteristics, and object categorizations. Data inputs may further include: sensor data, image data, video data, telematics data, authentication data, authorization data, security data, mobile device data, geolocation information, transaction data, personal identification data, financial data, usage data, weather pattern data, "big data" sets, and/or user preference data. ML outputs may include but are not limited to: a tracked shape output, categorization of an object, categorization of a type of motion, a diagnosis based on the motion of an object, motion analysis of an object, and trained model parameters. ML outputs may further include: speech recognition, image or video recognition, medical diagnoses, statistical or financial models, autonomous vehicle decision-making models, robotics behavior modeling, fraud detection analysis, user recommendations and personalization, game AI, skill acquisition, targeted marketing, big data visualization, weather forecasting, and/or information extracted about a computer device, a user, a home, a vehicle, or a party of a transaction. In some aspects, data inputs may include certain ML outputs. In some aspects, at least one of a plurality of ML methods and algorithms may be applied, which may include but are not limited to: genetic algorithms, linear or logistic regressions, instance-based algorithms, regularization algorithms,
decision trees, Bayesian networks, cluster analysis, association rule learning, artificial neural networks, deep learning, dimensionality reduction, and support vector machines. In various aspects, the implemented ML methods and algorithms are directed toward at least one of a plurality of categorizations of machine learning, such as supervised learning, unsupervised learning, and reinforcement learning. A control sample or a reference sample as described herein can be a sample from a healthy subject. A reference value can be used in place of a control or reference sample, which was previously obtained from a healthy subject or a group of healthy subjects. A control sample or a reference sample can also be a sample with a known amount of a detectable compound or a spiked sample. The methods and algorithms of the disclosure may be embodied in a controller or processor. Furthermore, methods and algorithms of the present disclosure can be embodied as a computer-implemented method or methods for performing such computer-implemented method or methods, and can also be embodied in the form of a tangible or non-transitory computer-readable storage medium containing a computer program or other machine-readable instructions (herein "computer program"), wherein when the computer program is loaded into a computer or other processor (herein "computer") and/or is executed by the computer, the computer becomes an apparatus for practicing the method or methods. Storage media for containing such computer programs include, for example, floppy disks and diskettes, compact disk (CD)-ROMs (whether or not writeable), DVD digital disks, RAM and ROM memories, computer hard drives and back-up drives, external hard drives, "thumb" drives, and any other storage medium readable by a computer. The method or methods can also be embodied in the form of a computer program, for example, whether stored in a storage medium or transmitted over a transmission medium such as electrical conductors, fiber optics or other light conductors, or by electromagnetic radiation, wherein when the computer program is loaded into a computer and/or is executed by the computer, the computer becomes an apparatus for practicing the method or methods. The method or methods may be implemented on a general-purpose microprocessor or
on a digital processor specifically configured to practice the process or processes. When a general-purpose microprocessor is employed, the computer program code configures the circuitry of the microprocessor to create specific logic circuit arrangements. Storage medium readable by a computer includes a medium readable by a computer per se or by another machine that reads the computer instructions for providing those instructions to a computer for controlling its operation. Such machines may include, for example, machines for reading the storage media mentioned above. Compositions and methods described herein utilizing molecular biology protocols can be according to a variety of standard techniques known to the art (see e.g., Sambrook and Russel (2006) Condensed Protocols from Molecular Cloning: A Laboratory Manual, Cold Spring Harbor Laboratory Press, ISBN-10: 0879697717; Ausubel et al. (2002) Short Protocols in Molecular Biology, 5th ed., Current Protocols, ISBN-10: 0471250929; Sambrook and Russel (2001) Molecular Cloning: A Laboratory Manual, 3d ed., Cold Spring Harbor Laboratory Press, ISBN-10: 0879695773; Elhai, J. and Wolk, C. P. 1988. Methods in Enzymology 167, 747-754; Studier (2005) Protein Expr Purif. 41(1), 207–234; Gellissen, ed. (2005) Production of Recombinant Proteins: Novel Microbial and Eukaryotic Expression Systems, Wiley-VCH, ISBN-10: 3527310363; Baneyx (2004) Protein Expression Technologies, Taylor & Francis, ISBN-10: 0954523253). Definitions and methods described herein are provided to better define the present disclosure and to guide those of ordinary skill in the art in the practice of the present disclosure. Unless otherwise noted, terms are to be understood according to conventional usage by those of ordinary skill in the relevant art. In some embodiments, numbers expressing quantities of ingredients, properties such as molecular weight, reaction conditions, and so forth, used to describe and claim certain embodiments of the present disclosure are to be understood as being modified in some instances by the term "about." In some embodiments, the term "about" is used to indicate that a value includes the standard deviation of the mean for the device or method being employed to determine the
value. In some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the present disclosure are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the present disclosure may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements. The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. The recitation of discrete values is understood to include ranges between each value. In some embodiments, the terms "a" and "an" and "the" and similar references used in the context of describing a particular embodiment (especially in the context of certain of the following claims) can be construed to cover both the singular and the plural, unless specifically noted otherwise. In some embodiments, the term "or" as used herein, including the claims, is used to mean "and/or" unless explicitly indicated to refer to alternatives only or the alternatives are mutually exclusive. The terms "comprise," "have" and "include" are open-ended linking verbs. Any forms or tenses of one or more of these verbs, such as "comprises," "comprising," "has," "having," "includes" and "including," are also open-ended. For example, any method that "comprises," "has" or "includes" one or more steps is not limited to possessing only those one or more steps and can also cover other unlisted steps. Similarly, any composition or device that "comprises," "has" or "includes" one or more features is not limited to possessing only those one or more
features and can cover other unlisted features. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the present disclosure and does not pose a limitation on the scope of the present disclosure otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the present disclosure. Groupings of alternative elements or embodiments of the present disclosure disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified, thus fulfilling the written description of all Markush groups used in the appended claims. All publications, patents, patent applications, and other references cited in this application are incorporated herein by reference in their entirety for all purposes to the same extent as if each individual publication, patent, patent application, or other reference was specifically and individually indicated to be incorporated by reference in its entirety for all purposes. Citation of a reference herein shall not be construed as an admission that such is prior art to the present disclosure. Having described the present disclosure in detail, it will be apparent that modifications, variations, and equivalent embodiments are possible without departing from the scope of the present disclosure defined in the appended claims. Furthermore, it should be appreciated that all examples in the present disclosure are provided as non-limiting examples.
EXAMPLES

The following non-limiting examples are provided to further illustrate the present disclosure. It should be appreciated by those of skill in the art that the techniques disclosed in the examples that follow represent approaches the inventors have found function well in the practice of the present disclosure, and thus can be considered to constitute examples of modes for its practice. However, those of skill in the art should, in light of the present disclosure, appreciate that many changes can be made in the specific embodiments that are disclosed and still obtain a like or similar result without departing from the spirit and scope of the present disclosure.

EXAMPLE 1 – METHOD AND APPARATUS FOR MOTION TRACKING

In this example, a three-dimensional motion tracking method is developed that can track the transverse speed of motion from several micrometers per second to several centimeters per second while also tracking the axial speed of motion from several micrometers per second to several millimeters per second. The novel motion tracking method relies on performing circular scans over the sample with optical coherence tomography. The magnitude and direction of sample motion are estimated by analyzing the intraframe and interframe changes among the series of images obtained by the imaging device. This method allows fast and high-precision motion tracking measurements, which can provide a way for active motion compensation via feedback control for in vivo microscopic and macroscopic imaging applications in the future.

Introduction

Sample motion often prevents the acquisition of high-quality images or video free of motion artifacts. Because of heartbeat, respiration, or unexpected organ peristalsis, live subjects are not stationary under the imaging system. For example, during the acquisition of confocal microscopy or two-photon microscopy in neuroscience studies, sample motion will create motion artifacts on the images. These artifacts will prevent researchers from visualizing the
micrometer-level structures, recording long-period time-lapse data, or distinguishing the signals from artifacts. Meanwhile, since the imaging system's field of view (FOV) is always confined, the sample may move out of the limited FOV during the image acquisition. Hence, the imaging system cannot capture the entire object structure and may lose valuable dynamic information after the data acquisition. Unique holders have been developed to limit the sample activity and to keep it within the FOV. However, the artificial limitation could also result in behavioral changes in the live subjects and further affect the universality and reliability of the experimental results and analysis. One can use motion tracking to provide feedback control to the imaging system. Thus, the imaging system will adaptively change its settings between exposures to keep the object within its FOV and decrease or even eliminate the motion artifacts from one exposure to the next. An auxiliary motion tracking system is needed to complete the feedback control loop and provide information to the primary imaging system. This extra motion tracking system should not add any new constraints to the sample or the primary system. Across many imaging modalities, optical coherence tomography (OCT), which is a non-invasive, label-free, three-dimensional (3D) imaging technique, is an ideal choice. Previous motion tracking algorithms with OCT relied on speckle decorrelation analysis. Liu et al. explored an essential relationship between the displacement of two adjacent axial scans (A-scans) and their cross-correlation coefficient (XCC). They used speckle decorrelation analysis because the speckle size is on the same order of magnitude as the OCT resolution; furthermore, it achieves better accuracy than conventional frame-to-frame object position tracking. Speckle decorrelation analysis uses signal formation and signal detection as the bridge to build the relationship between A-scans' displacement and their value of XCC. Based on the assumed scattering sample distribution and OCT's 3D point spread function, the signal formation of OCT's A-scan can be derived from its position. Meanwhile, the XCC of two A-scans is defined based on their signal intensity. Therefore, the XCC of two A-scans can be known from their position
differences. This analysis was applied to track the nonconstant beam scanning speed in the OCT system and to help correct the motion artifacts caused by nonuniform probe scanning in hand-held OCT. However, since the scan pattern of the hand-held system is only one-dimensional (1D) in the transverse plane, the analysis can only track the magnitude of the sample motion in the transverse plane but cannot provide the direction information. For two-dimensional (2D) transverse motion tracking, Liu et al. applied a circular scan pattern over the sample and tracked both the magnitude and the direction of the sample motion. With the new angle information provided by the circular scan, one can also estimate the sample motion's direction. The limitation is that the detectable speed range of the sample motion in the transverse plane is reported only from 0.2 mm/s to 1.4 mm/s, spanning less than one order of magnitude. Since the sample motion may vary from the micro to the macro level, the motion tracking method needs to have a high dynamic range. Therefore, a new 3D motion tracking method is proposed to track the sample motion in an extended speed range from several micrometers per second to several centimeters per second. This ability is achieved by controlling the circular scan pattern settings. Moreover, two novel analysis models for motion tracking in the transverse plane are introduced. For different estimable speeds of the sample motion, instructions are provided on adequately adjusting the circular scan pattern settings and using different analysis models. In addition, a detailed analysis of extracting the direction information of the sample motion in the transverse plane is provided. On top of this, axial motion tracking is combined with our two novel analysis models to achieve 3D motion tracking. Several experiments were performed to prove the correctness of our new motion tracking method and demonstrate our method of tracking different designed transverse motion patterns and the movement of the mouse skin in vivo in an anesthetized mouse.

Methods

When we repeatedly scan a light beam in a circle over a stationary sample (FIG.5A), we will acquire the same image in each scan, and the distances between
Docket No.: 020455/WO adjacent sampling points, or A-scans, on the circle are uniform. If the sample starts to move slowly in the transverse plane, or the XY plane (FIG.5B), the low-speed movement will not significantly change the distances between the sampling points within the exposure time of a single A-scan. However, the transverse movement will accumulate over a full circular scan and cause the object to have a small lateral displacement, which will result in an offset between the second circular scan and the first one (FIG.5C). If the sample moves fast, the relative speed between the sample motion and beam scanning speed will cause a large change, sufficient to change the spacing between A-scans within one circular scan (FIG.5D). Hence, through precise measurements of the displacements between every circular scan or A-scan within each circle, we can accurately determine the speed and direction of the sample motion. In various aspects, two analysis models are used: interframe analysis and intraframe analysis. A detailed analysis of how to extract the magnitude and direction of sample motion from interframe and intraframe analyses in various aspects is provided in additional detail below. In various aspects, the interframe analysis focuses on the intersection region between the adjacent circular scans and is aimed at tracking slow-speed motion (micrometers per second to millimeters per second). Due to the slow motion, two successive circular scans will be offset, and intersect each other, where they scan over the same position. The black dashed rectangle in FIG.5C outlines one of the intersection regions between adjacent circular scans. By measuring the intersection positions, we can derive the sample motion in the XY plane. In some aspects, the intraframe analysis focuses on the displacement between adjacent A-scans within one circular scan, aiming at tracking fast motion (millimeters per second to centimeters per second). As the bottom black dashed rectangle in FIG.5D shows, the displacement of two adjacent A-scans will become very close, because the beam scanning velocity and the sample motion have the same velocity direction (e.g., slow relative motion). Meanwhile, as the top black dashed rectangle in FIG.5D shows, there will be little overlap between two adjacent A-scans when the beam scanning velocity and the sample motion have opposite
velocity directions (e.g., fast relative motion). By analyzing the changes in the displacement of pairs of adjacent A-scans within one circular scan, we can learn how the sample moves in the XY plane in terms of both the magnitude and direction of the sample motion in the transverse plane. In some aspects, the intraframe analysis further extracts and analyzes depth information contained within each OCT A-scan. To track axial motion, we analyze the cross-correlation of images between successive circular scans to learn the displacement in the axial direction and further determine the sample's axial motion. Combining the sample motion in the transverse plane and the axial motion obtained as described above, we can track the speed and direction of the sample's motion in 3D over a high dynamic range by repeatedly performing circular scans over the sample. Furthermore, because OCT has micron-scale resolution in both the axial and transverse dimensions and can acquire hundreds of thousands of A-scans per second, high spatial and temporal resolution measurements can be obtained.

Circular scan pattern settings

Three different parameters are used to control the scan pattern, the spacing between circular OCT A-scans, and the beam scanning speed: the radius of the circular scan pattern ($R$, FIG.5A); the number of sampling points (A-scans) acquired in one circular scan ($N_A$); and the exposure time of each sampling point ($t_e$). Thus, each circular scan will take a total time $T = N_A t_e$, the spacing between adjacent A-scans in the transverse plane is $2\pi R / N_A$, and the beam scanning linear velocity is $2\pi R / (N_A t_e)$.
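By way of non-limiting illustration, these relationships may be computed as in the following short sketch (Python; the function name and the example values, which are drawn from the examples described below, are illustrative assumptions):

```python
import numpy as np

def scan_pattern(R, N_A, t_e):
    """Derived quantities for a circular scan pattern.

    R   : radius of the circular scan (m)
    N_A : number of A-scans per circular scan
    t_e : exposure time per A-scan (s)
    """
    T = N_A * t_e                   # period of one circular scan, T = N_A * t_e
    spacing = 2 * np.pi * R / N_A   # arc spacing between adjacent A-scans
    v_beam = spacing / t_e          # beam scanning linear velocity, 2*pi*R/(N_A*t_e)
    return T, spacing, v_beam

# Example with settings used in the interframe experiments below
T, d, v = scan_pattern(R=0.9e-3, N_A=2000, t_e=20e-6)
print(f"T = {T*1e3:.1f} ms, spacing = {d*1e6:.2f} um, beam speed = {v*1e3:.1f} mm/s")
```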
In various aspects, the circular scans may be performed with any suitable parameters and conditions without limitation. In one aspect, as described in the examples, the circular scans are performed in a counterclockwise direction starting
at three o'clock (90 deg.) as illustrated in FIGS.5B, 5C, and 5D. Without being limited to any particular theory, there are no significant differences in data acquisition or subsequent analysis between using counterclockwise or clockwise scanning directions. However, introducing a start phase $\phi \in [0, 2\pi)$ (FIG.5A) may decrease the errors in tracking the direction of the sample motion.

Transverse motion tracking

• Speckle decorrelation analysis

In various aspects, to calculate the displacement between adjacent A-scans based on their intensity, a speckle decorrelation analysis is used. The XCC between two A-scans is calculated from their intensity using Eq.1, where $N$ is the number of pixels per A-scan, $\zeta$ is the pixel index (axial position), $I_i(\zeta)$ and $I_j(\zeta)$ are the intensities of two different A-scans at pixel index $\zeta$, while $\bar{I}$ and $\sigma$ are the mean and standard deviation of the A-scan intensity $I(\zeta)$:

$$\mathrm{XCC}_{i,j} = \frac{1}{N} \sum_{\zeta=1}^{N} \frac{\left(I_i(\zeta) - \bar{I_i}\right)\left(I_j(\zeta) - \bar{I_j}\right)}{\sigma_i \sigma_j}. \qquad \text{Eq. 1}$$

Based on the statistics of the speckle development in OCT, the relationship between the square of the displacement ($d^2$) of two A-scans and the XCC can be represented as expressed in Eq.2, where $\omega_0$ is the OCT Gaussian beam waist, defined as the distance over which the beam intensity drops by $1/e$, which is also the transverse optical resolution of the OCT system:

$$d^2 = \omega_0^2 \ln\left(\frac{1}{\mathrm{XCC}}\right). \qquad \text{Eq. 2}$$

Thus, we are able to measure the square of the displacement $d^2$ between two A-scans by calculating the XCC value of the two A-scan intensity profiles. The motion tracking method will further rely on this relationship, as described below.
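By way of non-limiting illustration, Eq. 1 and Eq. 2 may be implemented as in the following sketch (function names are illustrative; the inputs are assumed to be NumPy arrays of equal length):

```python
import numpy as np

def xcc(I_i, I_j):
    """Cross-correlation coefficient (Eq. 1) between two A-scan intensity profiles."""
    N = len(I_i)
    num = (I_i - I_i.mean()) * (I_j - I_j.mean())
    return num.sum() / (N * I_i.std() * I_j.std())

def displacement_squared(xcc_value, omega_0):
    """Invert Eq. 2: d^2 = omega_0^2 * ln(1 / XCC)."""
    return omega_0**2 * np.log(1.0 / xcc_value)
```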
• Coordinate system setup of transverse motion tracking

The origin of the coordinate system is defined at the center of the circular scan. The $X$ and $Y$ axes, shown in FIG.5A, define the transverse plane, while the $Z$ axis (perpendicular to the $X$ and $Y$ axes, not shown) defines the axial direction. When the sample is moving at a speed $v_m$ in the transverse plane at an angle $\alpha$, the angle between the sample velocity and the positive $X$-axis is as shown in FIG.5B. For this motion, the circular scan beam trace on the sample will be a cycloid. Assuming the first A-scan started at $t = 0$, the coordinates of each position of the scanning beam over the sample $(x(t), y(t))$ at any time $t$ can be written as:

$$x(t) = -v_m \cos(\alpha)\, t + R \cos\left(\frac{2\pi t}{N_A t_e}\right), \qquad \text{Eq. 3}$$

$$y(t) = -v_m \sin(\alpha)\, t + R \sin\left(\frac{2\pi t}{N_A t_e}\right). \qquad \text{Eq. 4}$$
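By way of non-limiting illustration, the cycloidal beam trace of Eqs. 3 and 4 may be sampled as follows (a start phase of zero is assumed; the function name is illustrative):

```python
import numpy as np

def beam_trace(R, N_A, t_e, v_m, alpha, n_scans=2):
    """Sample the cycloidal beam trace of Eqs. 3-4 at each A-scan time."""
    t = np.arange(n_scans * N_A) * t_e       # A-scan acquisition times
    theta = 2 * np.pi * t / (N_A * t_e)      # scan angle at each A-scan
    x = -v_m * np.cos(alpha) * t + R * np.cos(theta)
    y = -v_m * np.sin(alpha) * t + R * np.sin(theta)
    return x, y
```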
• Interframe analysis

When a sample moves slowly, interframe analysis uses the intersected region between two successive circular scans to derive the sample motion's magnitude and direction, as shown in FIG.5C. For each A-scan on the first circular scan $A_i$ at $t_i = (i-1)t_e$, there exists a corresponding coordinate $(x(t_i), y(t_i))$, and for each A-scan on the second circular scan $A_j$ at $t_j = (N_A + j - 1)t_e$, there exists the corresponding coordinate $(x(t_j), y(t_j))$. $i^*$ and $j^*$ are used to represent the corresponding indices of the intersection between two circular scans, as expressed in Eq.5:

$$\left(x(t_{i^*}),\, y(t_{i^*})\right) \approx \left(x(t_{j^*}),\, y(t_{j^*})\right). \qquad \text{Eq. 5}$$
Using $x(t_{i^*}) \approx x(t_{j^*})$ from Eq. (5), we can derive $v_{mx}$, which is the component of the sample motion along the $X$-axis, where

$$v_{mx}(i^*-1)t_e - R\cos\left(\frac{2\pi(i^*-1)}{N_A}\right) = v_{mx}(N_A+j^*-1)t_e - R\cos\left(\frac{2\pi(j^*-1)}{N_A}\right), \qquad \text{Eq. 6}$$

so that

$$v_{mx} = \frac{R\left[\cos\left(\frac{2\pi(j^*-1)}{N_A}\right) - \cos\left(\frac{2\pi(i^*-1)}{N_A}\right)\right]}{(N_A + j^* - i^*)\, t_e}. \qquad \text{Eq. 7}$$
Similarly, $v_{my}$, the component of the sample motion along the $Y$-axis, can be derived from $y(t_{i^*}) \approx y(t_{j^*})$, as expressed in Eq.8:

$$v_{my} = \frac{R\left[\sin\left(\frac{2\pi(j^*-1)}{N_A}\right) - \sin\left(\frac{2\pi(i^*-1)}{N_A}\right)\right]}{(N_A + j^* - i^*)\, t_e}. \qquad \text{Eq. 8}$$

Combining $v_{mx}$ and $v_{my}$, Eq. 9 is used to calculate the speed of the sample's motion:

$$v_m^2 = v_{mx}^2 + v_{my}^2. \qquad \text{Eq. 9}$$
Substituting Eq. 7 and Eq. 8 into Eq. 9 and simplifying results in Eq.10:

$$v_m = \frac{2R\sin\left(\frac{\pi(j^*-i^*)}{N_A}\right)}{(N_A + j^* - i^*)\, t_e}. \qquad \text{Eq. 10}$$

The offset of the intersection indices is defined as $\varepsilon^* = j^* - i^*$. When $N_A \gg \varepsilon^*$, a first-order approximation is used to simplify Eq. (10), resulting in Eq.11:

$$v_m = \frac{2\pi R\, \varepsilon^*}{N_A^2\, t_e}. \qquad \text{Eq. 11}$$
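As an illustrative numerical check (the settings and intersection indices below are assumed values, not measurements from the examples), the first-order form of Eq. 11 closely tracks the exact Eq. 10 when $N_A \gg \varepsilon^*$:

```python
import numpy as np

R, N_A, t_e = 0.9e-3, 6000, 20e-6    # example circular scan settings
i_star, j_star = 1200, 1230          # hypothetical intersection indices
eps = j_star - i_star

v_exact = 2 * R * np.sin(np.pi * eps / N_A) / ((N_A + eps) * t_e)   # Eq. 10
v_approx = 2 * np.pi * R * eps / (N_A**2 * t_e)                     # Eq. 11
print(v_exact, v_approx)   # the two agree to well under 1% here
```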
The direction of the sample motion, $\alpha$, can be derived by dividing Eq.8 by Eq.7, as expressed in Eq.12:

$$\tan\alpha = \frac{\sin\left(\frac{2\pi(j^*-1)}{N_A}\right) - \sin\left(\frac{2\pi(i^*-1)}{N_A}\right)}{\cos\left(\frac{2\pi(j^*-1)}{N_A}\right) - \cos\left(\frac{2\pi(i^*-1)}{N_A}\right)}. \qquad \text{Eq. 12}$$

The direction $\alpha$ is then calculated from Eq. (12) using the sum-to-product identities, as expressed in Eq. 13:

$$\alpha = \frac{\pi(i^* + j^* - 2)}{N_A} + \frac{\pi}{2}. \qquad \text{Eq. 13}$$
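By way of non-limiting illustration, Eq. 11 and Eq. 13 may be combined into a minimal interframe estimator as sketched below (function and variable names are illustrative; the intersection indices are assumed to have been identified from the XCC maxima as described in the Results below):

```python
import numpy as np

def interframe_motion(i_star, j_star, R, N_A, t_e):
    """Estimate transverse speed (Eq. 11) and direction (Eq. 13) from the
    intersection indices of two successive circular scans."""
    eps = j_star - i_star                         # intersection index offset
    v_m = 2 * np.pi * R * eps / (N_A**2 * t_e)    # Eq. 11 (valid for N_A >> eps)
    alpha = np.pi * (i_star + j_star - 2) / N_A + np.pi / 2   # Eq. 13
    return v_m, alpha % (2 * np.pi)
```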
In various aspects, the interframe analysis may be implemented as described below. The square of the distance $d^2_{i,j}$ between each A-scan on the first circular scan $A_i$ and each A-scan on the second circular scan $A_j$ is defined as expressed in Eq.5A:

$$d^2_{i,j} = \left(x(t_i) - x(t_j)\right)^2 + \left(y(t_i) - y(t_j)\right)^2. \qquad \text{Eq. 5A}$$
The intersected region between two circular scans corresponds to $d^2_{i,j} \approx 0$, resulting in the relation of Eq.6A:

$$d^2_{i^*,j^*} = \left(x(t_{i^*}) - x(t_{j^*})\right)^2 + \left(y(t_{i^*}) - y(t_{j^*})\right)^2 \approx 0. \qquad \text{Eq. 6A}$$
The coordinates in Eq.6A are determined by the unknown magnitude and direction of the sample motion and the known values of the circular scan pattern settings. Thus, Eq.6A can be solved to get the relationship between the magnitude of the sample motion and the circular scan pattern settings as expressed in Eq.7A, where $i^*$ and $j^*$ represent the corresponding indices of A-scans defining the intersection of two circular scans as expressed in Eq.5A above:

$$v_m = \frac{2R\sin\left(\frac{\pi(j^*-i^*)}{N_A}\right)}{(N_A + j^* - i^*)\, t_e}. \qquad \text{Eq. 7A}$$

The direction of the sample motion in terms of the circular scan pattern is expressed as Eq.8A:

$$\alpha = \frac{\pi(i^* + j^* - 2)}{N_A} + \frac{\pi}{2}. \qquad \text{Eq. 8A}$$

The intersection index offset is defined as $\varepsilon^* = j^* - i^*$. Typically, since $N_A \gg \varepsilon^*$, we apply the first-order approximation of the sine function in Eq.7A and simplify it as expressed in Eq.9A:

$$v_m = \frac{2\pi R\, \varepsilon^*}{N_A^2\, t_e}. \qquad \text{Eq. 9A}$$
From Eq.9A and Eq.11, it is observed that by adjusting the settings of the circular scan patterns, a different detectable range of the sample motion can be achieved by the interframe analysis. By increasing the $N_A$ acquired in one circular scan, decreasing $R$, or increasing $t_e$, the system can be tuned to detect slower speeds. By decreasing the $N_A$ acquired in one circular scan, increasing $R$, or decreasing $t_e$, the system can be tuned to detect faster speeds.
• Intraframe analysis

When the sample speed increases, the distance between A-scans will no longer be uniform within one circular scan (FIG.5D). Thus, to measure the sample's motion, we switch to analyzing the spacing between adjacent A-scans within a single circular scan using an intraframe analysis method as described below. We note the coordinates of the $i$-th A-scan, $(x(t_i), y(t_i))$, and the coordinates of the $(i+1)$-st A-scan, $(x(t_{i+1}), y(t_{i+1}))$. The square of the distance $d^2_{i,i+1}$ between adjacent A-scans, $A_i$ and $A_{i+1}$, on one circular scan is given by Eq.14:

$$d^2_{i,i+1} = \left(x(t_i) - x(t_{i+1})\right)^2 + \left(y(t_i) - y(t_{i+1})\right)^2. \qquad \text{Eq. 14}$$
Substituting Eq. 3 and Eq. 4 into Eq. 14 yields Eq.15:

$$d^2_{i,i+1} = v_m^2 t_e^2 + 4R^2\sin^2\left(\frac{\pi}{N_A}\right) + 4 v_m t_e R \sin\left(\frac{\pi}{N_A}\right) \sin\left(\alpha - \phi - \frac{(2i-1)\pi}{N_A}\right). \qquad \text{Eq. 15}$$

Eq. 15 shows that $d^2_{i,i+1}$ varies sinusoidally with the A-scan index $i$, as expressed in Eq.16:

$$d^2_{i,i+1} = A\sin(\psi) + C, \qquad \text{Eq. 16}$$
with amplitude $A = 4 v_m t_e R \sin\left(\frac{\pi}{N_A}\right)$, offset $C = 4R^2\sin^2\left(\frac{\pi}{N_A}\right) + v_m^2 t_e^2$, and phase at any A-scan index $i$ of $\psi = \alpha - \phi - \frac{(2i-1)\pi}{N_A}$. A first-order approximation of the sine function in the amplitude ($\sin\left(\frac{\pi}{N_A}\right) \approx \frac{\pi}{N_A}$) is used to derive the speed of the sample's motion as expressed in Eq.17:

$$v_m = \frac{A\, N_A}{4\pi R\, t_e}. \qquad \text{Eq. 17}$$
The direction of the sample's motion can be derived from the A-scan index $i^*$ at which $d^2_{i,i+1}$ reaches the minimum, where $\psi = \frac{3\pi}{2}$. Thus, the direction of the sample's motion is given as expressed in Eq.18:

$$\alpha = \frac{(2i^*-1)\pi}{N_A} + \phi + \frac{3\pi}{2}. \qquad \text{Eq. 18}$$
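By way of non-limiting illustration, the intraframe analysis of Eqs. 14-18 may be sketched as follows (names are illustrative; the input is assumed to be an array of A-scan intensity profiles, and SciPy's curve_fit is used as a stand-in for any suitable sine-fitting routine):

```python
import numpy as np
from scipy.optimize import curve_fit

def intraframe_motion(scan, omega_0, R, N_A, t_e, phi=0.0):
    """Illustrative intraframe analysis: XCC of adjacent A-scans (Eq. 1),
    conversion to d^2 (Eq. 2), sine fit (Eq. 16), then speed (Eq. 17) and
    direction (Eq. 18). `scan` is an (N_A, n_pixels) intensity array."""
    I = (scan - scan.mean(axis=1, keepdims=True)) / scan.std(axis=1, keepdims=True)
    xcc = (I[:-1] * I[1:]).mean(axis=1)                 # Eq. 1 for adjacent pairs
    xcc = np.clip(xcc, 1e-6, 1 - 1e-9)                  # guard the log below
    d2 = omega_0**2 * np.log(1.0 / xcc)                 # Eq. 2

    idx = np.arange(1, N_A)                             # A-scan index i
    def model(i, A, alpha, C):                          # Eq. 16 with the Eq. 15 phase
        return A * np.sin(alpha - phi - (2 * i - 1) * np.pi / N_A) + C
    (A, alpha, C), _ = curve_fit(model, idx, d2,
                                 p0=[np.ptp(d2) / 2, 0.0, d2.mean()])

    v_m = abs(A) * N_A / (4 * np.pi * R * t_e)          # Eq. 17
    return v_m, alpha % (2 * np.pi)
```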
In various aspects, the settings of the circular scan pattern may be selected to adjust the detectable range of the sample motion speed. By increasing the number of A-scans ($N_A$) acquired and decreasing the radius of the circular scan ($R$) or the exposure time of each A-scan ($t_e$) in one circular scan, the system can be tuned to detect faster speeds. By decreasing the $N_A$ acquired and increasing $R$ or $t_e$ in one circular scan, the system can be tuned to detect slower speeds.

Axial motion tracking

The cross-section image displacement between successive circular scans is analyzed to detect axial motion using the OCT intensity signal. Since the A-scan retrieves the depth information, if the sample motion has a $Z$-axis speed component, $v_{mz}$, there will be a position offset $\Delta z$ between cross-section images from the successive circular scans. FIG.6A shows the flowchart of axial motion tracking with OCT. Two successive circular scans are loaded, and the first circular scan is designated as a reference scan. 2D XCC analysis is applied to the second circular scan to calculate the similarity to the reference, searching from top to bottom.
The displacement in the depth direction, $\Delta z$, is recorded at the position where the maximum XCC occurs. This displacement indicates that the sample moved by $\Delta z$ in the axial direction after a circular scan cycle. Since each circular scan takes $T = N_A t_e$, the velocity of the sample's axial motion can be expressed as Eq.19:

$$v_{mz} = \frac{\Delta z}{N_A t_e}. \qquad \text{Eq. 19}$$
In some aspects, a region of interest (ROI) is chosen in the first and second cross-section images as the tracking reference, shown as a dashed rectangle in FIGS.6C and 6D. The ROI spans a range of A-scans from $a_i$ to $a_j$ and a corresponding range of depth pixels from $p_i$ to $p_j$. The A-scans of the first (FIG.6C) and second (FIG.6D) cross-sectional images are cropped with the same number of depth pixels. The search proceeds from pixel number 1 to $M - (p_j - p_i)$, where $M$ is the total number of pixels in the depth of the cross-section images. The XCC between the cropped image and the reference ROI is calculated during the search. After the search is complete, the position $p_s$ where the maximum value of XCC is achieved is identified. The physical position offset from the first cross-section image to the second and following cross-section images caused by the sample axial motion is expressed in Eq.19A, where $\delta_z$ represents the physical size of each pixel in the $Z$ direction:

$$\Delta z = (p_s - p_i)\,\delta_z. \qquad \text{Eq. 19A}$$

Dividing this physical position offset by the circular scan time $T$ yields the sample axial motion velocity.
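By way of non-limiting illustration, the ROI search and Eqs. 19 and 19A may be sketched as follows (names are illustrative; the cross-section images are assumed to be 2D arrays whose rows are depth pixels):

```python
import numpy as np

def axial_velocity(ref_img, new_img, p_i, p_j, N_A, t_e, delta_z):
    """Illustrative axial tracking: slide a depth window over the new
    cross-section image, locate the maximum 2D XCC against the reference
    ROI, convert the pixel offset to physical units (Eq. 19A), and divide
    by the circular scan period to obtain v_mz (Eq. 19)."""
    roi = ref_img[p_i:p_j]                    # reference ROI (rows = depth pixels)
    h = p_j - p_i
    M = new_img.shape[0]                      # total depth pixels

    def xcc2d(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

    scores = [xcc2d(roi, new_img[s:s + h]) for s in range(M - h + 1)]
    p_s = int(np.argmax(scores))              # depth position of maximum XCC
    dz = (p_s - p_i) * delta_z                # Eq. 19A: physical axial offset
    return dz / (N_A * t_e)                   # Eq. 19: axial velocity v_mz
```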
OCT system setup

In various aspects, a customized spectral domain OCT (SD-OCT) system, shown in FIG.7, performs the circular scan for our motion tracking method. The SLED (Exalos, EBD291023-02, $\lambda_0$ = 1300 nm, $\Delta\lambda$ = 175.6 nm) is used as the broadband light source of our SD-OCT system, which provides an axial resolution of about 5.33 μm. The spectrometer (Cobra 1300, Wasatch Photonics) with a 2048-pixel InGaAs line-scan camera (Sensors Unlimited, GL2048) is used to measure the spectral interference pattern of the OCT signals. The camera achieves a maximum A-scan rate of 147 kHz, which provides the fastest A-scan exposure time of $t_e$ = 6.8 μs. The slowest exposure time of A-scans is $t_e$ = 105 μs. A 5X objective lens (Mitutoyo M Plan Apo NIR) is used to image the sample. The measured transverse resolution, as well as the beam waist $\omega_0$, is 3.9 μm. The circular scan pattern projected over the sample, shown as a circle superimposed over the sample in FIG.7, is generated by assigning two orthogonal sine wave voltages to the 2D galvanometer (GVS002, Thorlabs) on the sample arm, shown in FIG.7. The voltage ($V_a$) sent to the 2D galvanometer system controls the radius of the circular scan, where $R = k_v V_a$. The coefficient $k_v$ represents the physical displacement of the probing beam at the focal plane corresponding to the set voltage. It is calibrated by assigning different voltages to the 2D galvanometer system and measuring the corresponding physical scanning range with a US Air Force (USAF) 1951 test target (Max Levy, DA052). The linear ratio between the physical scanning range and the assigned voltage becomes our calibrated $k_v$. A two-axis precision motor stage (ALS130-100-NC-LTAS, Aerotech) and a Z-axis stepper motor stage (42BYG250Bk, Syntron) control the movement of the sample under the objective lens at a specified speed and direction.

Signal processing of OCT

Lab-designed software coded in C++ and CUDA uses a GPU (Nvidia GeForce RTX 2080) to increase the efficiency of signal processing of the OCT raw data. It allocates pinned memory for storing the collected data, which provides the fastest data transfer rate between the CPU and GPU. After transferring the data to the GPU, phase calibration and dispersion compensation correct and calibrate the OCT interference signal. Then, a fast Fourier transform (FFT) is performed to get the A-scan intensity from the interferogram. The log scale of the image intensity is stored for further analysis.
Motion tracking with high degrees of freedom

When the sample is stationary, repeated single-light-beam circular scans will generate the same OCT images, since each sampling point on the circular scans is from an identical location. When the sample starts to move slowly in the transverse plane, an offset between successive circular scans will be detectable. The interframe analysis uses this offset to track the magnitude and direction of the transverse motion. When the sample moves faster, the relative speed between the sample motion and the beam scanning will change significantly, resulting in changes in spacing between A-scans within one circular scan. The intraframe analysis uses these spacing changes to track the transverse motion. Meanwhile, since each OCT A-scan contains depth information, the axial motion of the sample can be measured by analyzing the cross-correlation of images or A-scans in the depth dimension during the successive circular scans. Therefore, with a single-beam circular scan, sample motion in three axes (x, y, z) can be measured, and speed and direction information of the moving sample can be obtained. In some aspects, a parallel imaging OCT system (FIG.8) can be used to project multiple beams on the sample simultaneously. Circular scans can be performed for all the parallel imaging beams simultaneously, providing concurrent measurements of sample motion at each imaging spot. The multiple imaging beams can be arranged in different patterns, such as a line (FIG.9A), a cross (FIG.9B), a grid (FIG.9C), or any other suitable pattern without limitation. By combining motion measured concurrently at each sample location, additional information defining the sample motion can be calculated. For example, if the sample is a rigid body, rotation of the sample about the x, y, and z axes (pitch, yaw, and roll) can be measured from the 3D motion of each sample point. For a non-rigid sample, the deformation of the sample at different locations can be measured by combining the 3D motion measured at each sample point. Compared to single-beam motion tracking, these additional degrees of freedom enabled by the parallel multi-beam scans will help characterize how the sample rotates or deforms. It will allow one to know more specifically about the mechanics and elastography of the imaged
sample than only knowing the velocity along three axes. Another advantage of parallel circular scans is that wide-field motion tracking can be achieved over a large sample, for example, the eyeball.

Sample preparation

To validate the disclosed motion tracking method, multilayer tapes and slices of defrosted chicken breast and onions were used as the imaging targets. Since the mouse is a widely used experimental animal model in many applications, we also demonstrate the disclosed motion tracking by tracking a mouse's respiration under anesthesia by imaging the fluctuations of the mouse's skin. A wild-type mouse was anesthetized using a mixture of Ketamine and Xylazine. After removing the mouse's hair to expose its skin, the mouse was placed in a supine position over a heating pad to perform the experiments described herein.

Results

XCC sensitivity calibration

To verify the accuracy and sensitivity of the disclosed method of calculating the square of the displacement ($d^2$) between two A-scans from a calculated XCC, the following experiment was conducted. Circular scans were performed over various stationary samples, and a range of $N_A$ values was selected for the circular scans to change $d^2$. We first kept the radius $R$ constant and changed the $N_A$ in one circular scan to change the density of the A-scans. From the geometry, we calculated the relationship between the square of the displacement of two adjacent A-scans in one circle ($d^2_{i,i+1}$) and $N_A$, which is $d^2_{i,i+1} = \frac{4\pi^2 R^2}{N_A^2}$. Combining the result with Eq. (2), we get:
$$\mathrm{XCC}_{i,i+1} = \exp\left(-\frac{4\pi^2 R^2}{N_A^2\,\omega_0^2}\right). \qquad \text{Eq. 20}$$
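By way of non-limiting illustration, the theoretical calibration curve of Eq. 20 may be evaluated as follows (the beam waist is the measured value reported in the system setup above; the radius and the $N_A$ sweep are illustrative assumptions):

```python
import numpy as np

omega_0 = 3.9e-6                       # measured beam waist (m), from system setup
R = 0.9e-3                             # example circular scan radius (m)
N_A = np.array([1000, 2000, 4000, 7500, 15000])

d2 = (2 * np.pi * R / N_A) ** 2        # squared spacing of adjacent A-scans
xcc_theory = np.exp(-d2 / omega_0**2)  # Eq. 20: predicted adjacent-A-scan XCC
print(np.round(xcc_theory, 3))
```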
Measurements were obtained from stationary samples of tape, onion, and chicken breast. As seen in FIG.10A, for each circular scan, we calculated the average of the XCC for all pairs of adjacent A-scans. We also calculated the theoretical value of XCC based on Eq. (20) and plotted the result as the solid line in FIG.10A. When $N_A$ is large enough, which implies that the $d^2$ between adjacent A-scans is small, the experimental XCC values of all three samples are close to the theoretical values. However, when $N_A$ is small, which implies that the $d^2$ between two A-scans is large, we observe mismatches between the calculated XCC and the theoretical value for each sample type. Due to the structure of the sample and the intrinsic noise of the OCT signal, an XCC noise floor ($\rho_0$) occurs, below which the value of XCC will not fall no matter how much the displacement between two A-scans is increased. This noise floor influences the accuracy of the conversion from XCC to $d^2$. Hence, to ameliorate the noise floor effect, circular scan pattern settings are specified to provide sufficient overlap between adjacent A-scans. FIG.10B shows the data of FIG.10A with $N_A$ converted to $d^2$ based on Eq. 16A. When $N_A$ is small, the $d^2$ between two A-scans will be large, while when $N_A$ is large enough, the $d^2$ between two A-scans will be small, and there will be less mismatch between the experimental XCC and the theoretical XCC. From theory, when two A-scans are separated by a sufficiently large $d^2$, there should be no correlation between them. However, due to the structure of the sample and the intrinsic noise of the OCT system, there exists an XCC noise floor ($\rho_0$) affecting the accuracy of the conversion from XCC to the real physical $d^2$. The XCC noise floor is affected not only by the system but also significantly by the samples. FIG.10C summarizes XCC data similar to the data of FIG.10A, with $N_A$ maintained at a constant value of 7500. In this figure, a single A-scan within a circular scan pattern is selected, and its neighboring A-scans in the circular scan pattern are used to calculate XCCs. As we widen the neighbor range to increase $d^2$, it is observed that the value of XCC also drops to the noise floor for the different samples. Moreover, the values of the noise floor of the different samples match the noise floor values observed in FIG.10A. The results of these experiments, in particular the observation of the XCC noise floor for different samples, demonstrate that the circular scan pattern settings used to implement the motion tracking method as disclosed herein should result in sufficient overlap between the A-scans to provide calculated XCC values that are well above the XCC noise floor.

Transverse motion tracking verification

• Interframe analysis for slow motion

Using the slice of onion as a test example, the sample speed was set at 0.8 mm/s and the circular scan pattern settings were specified as
$N_A$ = 2000, $t_e$ = 20 μs, and $R$ = 0.9 mm. The sample was moved along the positive $X$-axis to validate the interframe analysis. FIGS.11A and 11B show two adjacent circular scan cross-section images cropped to include the same range of A-scan indices. The regions within the superimposed left and right rectangles have shifted from FIG.11A to FIG.11B. Highly similar structures are observed within the left and right rectangles. As indicated by the white arrows in FIG.11A, the structure in the left rectangle is shifted slightly to the right in FIG.11B, while the structure in the right rectangle is shifted slightly to the left. These shifts indicate that the adjacent circular scans have scanned the same region over the sample, so the offset, or the intersection, has been captured. To estimate the magnitude and direction of the sample motion from Eq.11 and Eq.13, we need to know the intersection indices for both the first circular scan and the second circular scan to derive the intersection index offset, $\varepsilon^*$. Based on Eq.1, we computed the XCC between all pairs of A-scans in the first and the second circular scans. To find the intersection described above by Eq. (5), we recall that the higher the value of XCC, the closer the two A-scans will be, as specified in Eq. 2 above. We first choose the $j$-th A-scan that most highly correlates with each $i$-th A-scan, shown plotted in FIG. 11C, where the x-axis represents each
$i$-th A-scan on the first circular scan, and the y-axis represents the maximum value of XCC among all the $j$-th A-scans. We applied XCC thresholding according to the bimodal distribution to remove the background noise, where the A-scans have a low correlation. The de-noised data of FIG.11C was used to determine the intersection index offset, $\varepsilon^*$, based on the corresponding $i^*$ and $j^*$, as summarized in FIG.11D. Eq.11 of the interframe analysis is validated using measurements obtained with the system and samples described above. The sample speed is fixed at
$v_m$ = 0.5 mm/s, $R$ = 0.9 mm, and $t_e$ = 20 μs, but $N_A$ is changed from 1000 to 8000. The relationship between $N_A^2$ and $|\varepsilon^*|$ is plotted in FIG.12A. Similar measurements were used to obtain the results of FIG.12B, but with fixed parameters $N_A$ = 6000 and $t_e$ = 20 μs, with $R$ varied from 0.15 mm to 2.1 mm, and the relationship between $1/R$ and $|\varepsilon^*|$ was calculated as described above. Similar measurements were used to obtain the results of FIG.12C with fixed parameters $N_A$ = 6000 and $R$ = 0.9 mm, with $t_e$ varied from 6.8 μs to 105 μs. The relationship between $t_e$ and $|\varepsilon^*|$ is plotted in FIG.12C. The linear data in FIGS.12A, 12B, and 12C exhibit a close match between the experimental results and the theoretical analysis, thereby validating Eq.11. Eq.11 also indicates that the interframe analysis as disclosed above, conducted using different selected settings of the circular scan pattern, provides the ability to capture different ranges of sample motion. Three different circular scan pattern settings with three different speed ranges were tested. The first dataset was obtained using a circular scan pattern with
$N_A$ = 6000, $t_e$ = 20 μs, $R$ = 0.9 mm, and sample motion speeds ranging from 10 µm/s to 0.9 mm/s. The second dataset was obtained using a circular scan pattern with $N_A$ = 4000, $t_e$ = 20 μs, $R$ = 0.9 mm, and a sample speed varied from 0.4 mm/s to 1 mm/s. The third dataset was obtained using a circular scan pattern with $N_A$ = 2000, $t_e$ = 20 μs, $R$ = 0.9 mm, and a sample speed ranging from 0.8 mm/s to 3 mm/s. The dots with error bars in FIG.12D summarize the experimental results, and the dashed fitted lines demonstrate the linearity between the sample speed and the absolute value of the intersection index offset, $|\varepsilon^*|$, from Eq.11. The three different circular scan pattern experiments
demonstrate that by changing the settings of the circular scan patterns, interframe analysis can cover a detectable speed range from several micrometers per second to several millimeters per second. From Eq. (11) and the above experimental results, $N_A$ or $t_e$ may be increased, or $R$ may be decreased, to detect slower speeds in this range. Similarly, $N_A$ or $t_e$ may be decreased, or $R$ may be increased, to detect faster speeds in this range. To validate the direction estimated by the interframe analysis from Eq.13 as disclosed herein, additional measurements were obtained as described above in which the sample motion speed was fixed at 0.5 mm/s and the circular scan pattern was specified with $N_A$ = 2000, $t_e$ = 20 μs, and $R$ = 1.5 mm. The direction of the sample motion was varied over 360°, and the corresponding $i^*$ and $j^*$ values that reach the maximum of XCC in the adjacent circular scans were recorded. We then calculated $\alpha$ and plotted its changes corresponding to our experimental settings in FIG.12E. The close correspondence of the calculated movement direction $\alpha$ to the dashed line ($y = x$) demonstrates the direction detection ability of the interframe analysis, with an average error of ~1.3°. It is to be noted that when the angle was close to 90° or 270°, the simulation and the experimental data exhibited some mismatches. Without being limited to any particular theory, the two circular scans have only one intersection under these conditions, while another high-correlation region occurs at the end of the first circular scan and the beginning of the second. The bimodal distribution will then have errors in fitting the data, with a peak around 0. To avoid this issue, the circular scan's start phase (ϕ) can be specified away from such regions.

• Intraframe analysis for fast motion

Using a slice of defrosted chicken breast, we set the speed of sample motion
at 7.5 mm/s and set the circular scan pattern to $N_A$ = 10000, $t_e$ = 60 μs, and $R$ = 0.75 mm. We moved the sample along the positive $X$-axis to validate the intraframe analysis for fast motion. FIG.13A shows a single circular scan cross-section image of the chicken breast. The OCT image inside the right rectangle in FIG.13A (zoomed-in view in FIG.13C) looks stretched because of the high concentration of A-scans at close positions, which are highly correlated. In contrast, the OCT image inside the left rectangle in FIG.13A (shown zoomed in FIG.13B) is well-delineated. The A-scans here are from distinct locations, which are weakly correlated. Based on Eq.1, we calculated the XCC between all pairs of adjacent A-scans in one circular scan and plotted the results in FIG.13D. The maximum and minimum positions of the XCC value match our observations in FIG.13A. We further used Eq.2 to convert the XCC into the $d^2$ between the adjacent A-scans, shown as dots in FIG.13E. We fitted the experimental results using a sine curve, as described in Eq. (16), shown as a superimposed dashed line in FIG.13E. From the amplitude and the phase of the fitted sine curve, we then calculated the magnitude and direction of the sample motion. To validate Eq.17 of the intraframe analysis, where the sample motion is derived, we first set the sample speed to
$v_m$ = 12 mm/s and the circular scan pattern settings to $R$ = 1.2 mm and $t_e$ = 20 μs, but changed $N_A$ from 6000 to 11000, and plotted the relationship between the amplitude of the sine curve of $d^2_{i,i+1}$ (Eq. 16) and $1/N_A$ in FIG.14A. We then set $N_A$ = 8000 and $t_e$ = 20 μs, but changed $R$ from 0.12 to 1.2 mm, and plotted the relationship between the amplitude of the sine curve and $R$ in FIG.14B. Last, we set $N_A$ = 6000 and $R$ = 0.9 mm, but varied $t_e$ from 6.8 μs to 105 μs, and plotted the relationship between $t_e$ and the amplitude of the sine curve in FIG.14C. In all cases, the experimental results in FIGS.14A, 14B, and 14C were fitted with dashed lines. The agreement of the linear relationship with the amplitude of the sine curve validates the relationship shown in Eq.17. Similar to the interframe analysis, Eq.17 in the intraframe analysis tells us that changing the circular scan patterns will lead to different ranges of detectable motions. As a test, we set three different circular scan patterns and varied the
sample speed ranges. The first circular scan pattern settings were $N_A$ = 12000, $t_e$ = 20 μs, and $R$ = 1.2 mm, and we varied the sample speed from 1 mm/s to 6 mm/s. The second pattern settings were $N_A$ = 6000, $t_e$ = 20 μs, and $R$ = 0.9 mm, and we varied the speed from 4 mm/s to 16 mm/s. The third pattern settings were $N_A$ = 5000, $t_e$ = 20 μs, and $R$ = 0.72 mm, and we varied the speed from 14 mm/s to 22 mm/s. FIG.14D shows the amplitude of the sine curve (Eq.16) for each of these conditions as blue, black, and green dots with error bars. We also superimposed dashed linear fitted lines in FIG.14D to show the linear correspondence between the sample speed and the amplitude of the sine curve from Eq.17. The experimental results show that by specifying the settings defining the circular scan patterns, we can use intraframe analysis to cover a detectable speed range from several millimeters per second to several centimeters per second. From Eq.17 and the results described above, to detect slower speeds in this range with intraframe analysis, we can decrease $N_A$, increase $R$, or increase $t_e$. To detect faster speeds in this range with intraframe analysis, we can increase $N_A$, decrease $R$, or decrease $t_e$. Finally, to validate the effectiveness of Eq.18 in detecting the direction of the sample motion, the speed of the sample motion was fixed at 7.5 mm/s, and the circular scan pattern was again set at $N_A$ = 10000, $t_e$ = 60 μs, and $R$ = 0.75 mm. The direction of the sample motion was varied from 0 to 2π. The phase of the sine curve of Eq.16 was obtained as described above, and the experimentally estimated direction of the sample motion was calculated. The $x$-axis in FIG.14E represents the set angle in the stage settings that control the sample motion direction. The $y$-axis in FIG.14E represents the calculated angle in the tracking results from the sine curve phase. The dashed line ($y = x$) is also superimposed over the calculated data in FIG.14E. A close match between the set angle and the calculated angle was observed in FIG.14E, demonstrating that intraframe analysis may be used to detect the direction of the sample motion. Based on the standard deviation of the experimental results from the $y = x$ line in FIG.14E, the error in direction detection is around 4.23°.

• Transverse motion tracking extracts motion pattern

To demonstrate the generality of the motion tracking method in the transverse plane, measurements were obtained as described above for a sample moving in a predetermined 2D pattern (FIG.15A). The pattern included four basic
motion conditions: 1) both speed and direction change at the same time, 2) speed changes, but direction does not, 3) direction changes, but speed does not, and 4) neither speed nor direction changes. 2D transverse sample motion patterns were designed and coded into the X-Y stage controller software to move the sample at different speeds and directions. The stage movement was started, and the circular scan acquisition was synchronously started as described above. The circular scanning pattern was set at $N_A$ = 4000, $R$ = 0.9 mm, and $t_e$ = 20 μs. The magnitudes and directions of the sample motion were calculated based on the circular scan data and are plotted in FIGS.15B and 15C. The calculated speeds and angles summarized in FIGS.15B and 15C were combined to reconstruct the extracted sample motion pattern. The reconstructed motion pattern, shown as overlaid arrowheads in FIG.15A, exhibits close agreement with the designed motion pattern used to specify specimen movement during the acquisition of circular scan data. The average accuracy of the tracked speed is 98.1%, where the error is ~0.004 mm/s. The average accuracy of the tracked direction is ~99.7%, where the error is ~0.3°.

Axial motion tracking verification

To validate axial motion tracking using the disclosed tracking methods, a chicken breast sample prepared as described above was placed on the movement stage programmed for no lateral (X-Y) movement and for axial (Z) movement at different speeds. To extract the $Z$-axis speed component, $v_{mz}$,
we followed the flowchart shown in FIG.6A. FIGS.16A and 16B are cross-section images of OCT circular scans before and after the Z-stage movement. The superimposed solid lines in FIGS.16A and 16B represent the surface location before the axial motion. The superimposed dashed line in FIG.16B represents the surface location after the axial motion shift of $\Delta z$. The reference ROI in the cross-section images of the first circular scan is determined, and the XCC between the reference ROI and the subsequent cross-sectional images is calculated. The XCC is plotted as a function of the axial pixel shifts in FIG.16C. The line represents the reference ROI location when the sample does not move. From the index of the axial pixel that reaches the maximum XCC in the line, the value of $p_i$ in Eq.19A is obtained. When the axial sample motion speed is equal to 2.7 mm/s, in the second circular scan cross-section images, one starts to crop and search for the maximum XCC from the first pixel and plots the result in the line in FIG.16C. From the index of the axial pixel where the XCC reaches the maximum in the line, the value of $p_s$ in Eq.19A is obtained. From $p_i$ and $p_s$, the axial pixel offsets can be calculated and converted to the physical axial motion speed. Though we demonstrated the axial shift at the sample's surface, we used the sample's structural information near the surface for motion tracking. We changed the axial sample motion speed from 0.3 mm/s to 2.7 mm/s, then plotted the axial pixel offset and the axial speed as dots with error bars, and a linear fit of the results as a superimposed dashed line in FIG.16D. The linearity again proves the correctness of Eq.19, which validates the ability of axial motion tracking using the methods disclosed herein.

Motion tracking of mouse skin in vivo

To demonstrate that the motion tracking method disclosed herein can be applied to in vivo imaging, the following experiments were conducted. Circular scans were performed on a live mouse under anesthesia. The mouse was placed under the objective lens of the OCT system, and the mouse skin near the mouse's chest was focused on, as shown in FIG.17A. Since a mouse typically has a very high breathing frequency, a suitable frame rate was needed for axial motion tracking. The circular scan pattern settings used were $N_A$ = 600, $R$ = 0.15 mm, and $t_e$ = 10 μs to track the mouse heartbeat with axial motion tracking. The experimental results are shown in FIG.17B. The average beating rate calculated from the peak-to-peak time differences is 195.22 bpm, which matches a healthy mouse's heart rate under anesthesia. During breathing, the absolute axial motion speed of the chest reached ~5.5 mm/s. The mouse was subsequently placed on the moving stage to give it a transverse motion speed. To test our motion tracking in 3D, we added a transverse
motion to the mouse by moving the stage at 0.8 mm/s. In this experiment, the XY-plane motion was provided by the stage, and the Z-direction motion was provided by the mouse's breathing. Considering the small size of the mouse's chest, we applied interframe analysis and switched our circular scan pattern settings to $N_A$ = 1500, $R$ = 0.3 mm, and $t_e$ = 40 μs. From Eq. 11, we then calculated the magnitude of the average transverse motion as 0.78 mm/s. These results demonstrate that our motion tracking could be applied to future in vivo imaging scenarios.

Discussion

The experiments described above demonstrated that by changing the settings of the circular scan pattern, both intraframe and interframe analyses can provide different detectable speed ranges in transverse motion tracking. Furthermore, it is observed that there is an overlapping detectable speed range between the two analyses, which provides the bridge for switching between analysis approaches when the speed of the sample motion becomes faster or slower. The minimum detectable speed is characterized by the interframe analysis and, in particular, is determined based on Eq.11. Since the absolute value of the intersection index offset $\varepsilon^*$
is an integer, its minimum value will be 1. In other words, if $\varepsilon^*$ goes to zero, the motion between two adjacent circular scans cannot be detected, and the interframe analysis will conclude that the sample did not move. In this case, within one circular scan time $T = N_A t_e$, the second circular scan does not move more than one pixel from the first circular scan, and the two circular scans are almost identical. Therefore, the theoretical minimum detectable speed of interframe analysis is:

$$v_{m,\min} = \frac{2\pi R}{N_A^2\, t_e}. \qquad \text{Eq. 21}$$

To detect speeds slower than the limit of Eq.21, two options may be selected. The first option is to consider analyzing the third circular scan, or even additional subsequent circular scans, and find the intersected region between the subsequent and first circular scans. A second option
is to consider changing the settings of the circular scan patterns. $N_A$ or $t_e$ can be increased to lengthen the single circular scan period; the sample will then move a noticeable distance between two adjacent circular scans. $R$ can also be decreased, or $N_A$ can be increased, to have a finer resolution to track the sample motion. However, since the pixel size of each A-scan is $2\pi R / N_A$, the pixel size should be considered when manipulating the values of $N_A$ and $R$: the pixel size should be less than or equal to half of the beam waist $\omega_0$ to comply with the sampling theorem and to avoid the effect of the XCC noise floor described above. If the beam waist is decreased, thereby enhancing the transverse resolution of the OCT imaging system, better data resolution can be obtained to detect lower speeds. The maximum detectable speed is determined by the intraframe analysis and is limited by the XCC noise floor ($\rho_0$).
When we convert the XCC from acquired data to the square of the distance ($d^2$) between adjacent A-scans within one circular scan based on Eq. (2) as described above, once the relative speed between the sample motion and the beam scanning velocity is too large, the conversion will lose accuracy. To retrieve all the information of the fitted sine curve in Eq.16 correctly, we need $\max\left(d^2_{i,i+1}\right) \leq d_0^2$, where $d_0$ is the maximum displacement that can be detected above the noise floor of the XCC, as calculated in Eq.22:

$$d_0^2 = \omega_0^2 \ln\left(\frac{1}{\rho_0}\right). \qquad \text{Eq. 22}$$
From this limit, the maximum speed that can be detected is derived in Eq.23:

$$v_{m,\max} = \frac{d_0}{t_e} - \frac{2\pi R}{N_A t_e}. \qquad \text{Eq. 23}$$

If the speed we want to detect is faster than Eq.23 allows, we again have two options for tracking the faster speed, under two conditions. Intraframe analysis relies heavily on a suitable relative speed between the beam scanning velocity and the sample motion. If the sample speed is slower than the scanning velocity, but it
exceeds the value Eq.23 defines, $N_A$ may be increased or $t_e$ may be decreased to bring the beam scanning conditions closer to the magnitude of the sample motion, or $R$ can be decreased to increase the correlation of the two adjacent A-scans. In these ways, changing the circular scan pattern settings makes the sine curve of Eq.16 better defined. If the sample speed is faster than the beam scanning speed, we can choose to focus on the minimum of the sine curve and its A-scan index, as expressed in Eq.16. In this case, the maximum speed of detection can be extended to Eq.24:

$$v_{m,\max} = \frac{d_0}{t_e} + \frac{2\pi R}{N_A t_e}. \qquad \text{Eq. 24}$$
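By way of non-limiting illustration, the dynamic-range bounds of Eqs. 21 through 24 may be evaluated together as sketched below (the noise floor value is an illustrative assumption; $\omega_0$ is the measured beam waist from the system setup):

```python
import numpy as np

def detectable_speed_range(R, N_A, t_e, omega_0, rho_0):
    """Bounds on trackable speed for one circular scan pattern.
    Returns (v_min from Eq. 21, v_max from Eq. 23, extended v_max from Eq. 24)."""
    d0 = omega_0 * np.sqrt(np.log(1.0 / rho_0))          # Eq. 22: max usable displacement
    v_min = 2 * np.pi * R / (N_A**2 * t_e)               # Eq. 21: interframe lower bound
    v_max = d0 / t_e - 2 * np.pi * R / (N_A * t_e)       # Eq. 23: intraframe upper bound
    v_max_ext = d0 / t_e + 2 * np.pi * R / (N_A * t_e)   # Eq. 24: extended upper bound
    return v_min, v_max, v_max_ext

# Example with the measured beam waist (3.9 um) and an assumed noise floor of 0.3
print(detectable_speed_range(R=0.9e-3, N_A=6000, t_e=20e-6, omega_0=3.9e-6, rho_0=0.3))
```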
Adjustable circular scan pattern settings are indispensable for our high dynamic range motion tracking. If we only use interframe analysis, when we decrease $N_A$ or increase $R$ to detect high-speed motion, the displacement between A-scans will increase, and the intersection of two adjacent circles cannot provide enough overlap to calculate the XCC accurately. Thus, the interframe analysis will lose its accuracy for high-speed motion. If we only use intraframe analysis, although the highest detection speed has been clarified in Eq.23 and Eq.24, when we decrease $N_A$ or increase $R$ in order to test low-speed motion, there will be insufficient overlap between adjacent A-scans for accurate XCC calculation. The interframe and intraframe analyses are thus complementary in their speed detection ranges. In the actual application of our 3D motion tracking, circular scanning will be continuously performed over the sample. We may not need to always start the analysis from the place where the start phase of scanning is equal to 0, and we also may not need to always scan counterclockwise. Introducing a new start phase variable or a clockwise scanning direction will not change the essence of our analysis models. Prior knowledge or estimation of the measured speed range can help set the initial parameters and choose whether to use interframe or intraframe analysis. In practice, for an unknown speed range, scan parameter settings may be
adaptively changed to achieve optimal performance. Scan parameters may be fine-tuned to clearly distinguish the index offset in interframe analysis or the amplitude of the sine curve in intraframe analysis. Meanwhile, the choice of interframe or intraframe analysis also depends on which method can more clearly extract the magnitude and direction of the sample motion. The result of our motion tracking is the average value within the data acquisition time, which is also determined by our frame rate, or temporal resolution, equal to the reciprocal of the product of the adjustable parameters $N_A$ and $t_e$. In some aspects, the disclosed method may be provided with feedback loop control in the future, to reduce the latency, by integrating the analyses on the graphics processing unit (GPU) or field programmable gate array (FPGA) of a computing device configured to implement the disclosed method of motion analysis.

EXAMPLE 2 – APPLICATIONS OF MOTION TRACKING

The motion tracking technology described in the current disclosure can be directly applied to various applications, ranging from measuring and compensating sample motions due to breathing, heartbeat, muscle contractions, shaking of hand-held devices, or other movements, to stabilizing microscopic image acquisitions for biomedical research. It can also be used to provide real-time motion corrections and guidance to improve robotic surgery and operations (such as intraocular operations). Unlike traditional motion tracking methods based on tracking sample surface features using a 2D camera, this technique utilizes image information from inside the sample enabled by OCT imaging technology. This enables high-resolution motion tracking in the depth direction in addition to tracking transverse motions. Combining analysis of signals from different depths also helps improve the signal-to-noise ratio (SNR) of the measurements. The proposed method can also be used in combination with other motion tracking method(s) in order to provide complementary information on sample motion at different spatial and temporal scales. Other imaging modalities, such as ultrasound, MRI, and CT, where internal
images of objects can be obtained, may also benefit from this technology. Beam forming and scan patterns similar to those described herein will be needed in order to retrieve sample motion information. The spatial and temporal resolution and the dynamic range of the measurements depend on the specifications of each imaging modality and will need to be considered for individual cases. In addition to tracking microscopic movements or motions, the proposed method may be extended to track motion at higher speeds by changing the range and resolution of the measurements. For example, using a narrow-linewidth light source and a fast sweeping laser, this invention can be used for ranging measurements and precisely detecting the distance of objects, such as cars, planes, rockets, etc., similar to LiDAR. The beam scanning and data processing methods presented here may be used to analyze object motion with high accuracy and enable measurements of several degrees of freedom, extending the utility of this technology to industrial applications, such as autonomous vehicles, drones, quality control, etc.