
WO2023068773A1 - Method of recommending form factor of rollable smartphone - Google Patents


Info

Publication number
WO2023068773A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
state
form factor
current
predicted
Prior art date
Application number
PCT/KR2022/015889
Other languages
French (fr)
Inventor
Pulkit AGRAWAL
Kaushal Kumar
Gaurav Mishra
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2023068773A1 publication Critical patent/WO2023068773A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/0475 Generative networks
    • G06N 3/08 Learning methods

Definitions

  • The present invention generally relates to predicting a form factor for a mobile device, and more particularly relates to methods and systems for predicting the form factor for a mobile device based on a current operating state of the user and a current digital state of the mobile device.
  • Flexible display screens are gaining popularity and becoming a mainstream technology, as they are employed in various devices, such as televisions, wearable devices, smartphones, tablet computers, etc., and even as standalone flexible displays. Further, for mobile phones, rollable smartphones have become available, in which the user may manually adjust/roll the display screen.
  • An indicative parameter for mobile devices of different sizes is known as a form factor.
  • The form factor of a mobile phone is its size, shape, and style, as well as the layout and position of its major components.
  • The form factor for a device may also be generally expressed as a device holding size.
  • Currently available form factors may be categorized into the standard examples of phone, phablet, band, or tablet modes. Accordingly, each form factor corresponds to a different size of mobile device.
  • Manually rollable phones support variable form factors (sizes) due to their flexible display screens.
  • However, current rollable phones do not include any intelligence to automatically adjust the display screen.
  • The handling and operation of smartphones is hampered when the rolling function is not smart/intelligent.
  • A method of providing a form factor for a mobile device associated with a user comprises detecting a change in at least one of a current operating state of the user and a current digital state of the mobile device. Further, the method comprises predicting a form factor for the mobile device based on the current operating state of the user and the current digital state of the mobile device. Furthermore, the method comprises determining whether to accept the predicted form factor based on a number of touch interactions required for the current digital state of the mobile device, and a number of touch interactions supported by the predicted form factor and the current operating state of the user. Additionally, the method comprises providing the predicted form factor for the mobile device based on a determination to accept the predicted form factor, wherein providing the predicted form factor corresponds to a recommendation to modify the physical size of the mobile device.
  • A system of providing a form factor for the mobile device associated with a user comprises a detection module configured to detect a change in at least one of a current operating state of the user and a current digital state of the mobile device. Further, the system comprises a form factor prediction module in communication with the detection module and configured to predict a form factor for the mobile device based on the current operating state of the user and the current digital state of the mobile device. Furthermore, the form factor prediction module is configured to determine whether to accept the predicted form factor based on a number of touch interactions required for the current digital state of the mobile device, and a number of touch interactions supported by the predicted form factor and the current operating state of the user. Additionally, the form factor prediction module is configured to provide the predicted form factor for the mobile device based on a determination to accept the predicted form factor, wherein the predicted form factor corresponds to a recommendation to modify the physical size of the mobile device.
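The claimed method can be summarized as a four-step decision procedure. The following is an illustrative sketch only, not the patent's implementation; all function names, and the idea of passing the predictor and interaction counters as callables, are assumptions made for the example:

```python
# Hypothetical sketch of the claimed detect -> predict -> validate -> provide flow.
def recommend_form_factor(operating_state, digital_state, previous_states,
                          predict, interactions_required, interactions_supported):
    """Return a recommended form factor, or None if no change is accepted."""
    # Step 1: detect a change in the user's operating state or the device's digital state.
    if (operating_state, digital_state) == previous_states:
        return None
    # Step 2: predict a candidate form factor from the two current states.
    candidate = predict(operating_state, digital_state)
    # Step 3: accept only if the candidate (together with the operating state)
    # supports at least the touch interactions the digital state requires.
    required = interactions_required(digital_state)
    supported = interactions_supported(candidate, operating_state)
    if supported >= required:
        # Step 4: provide the prediction as a recommendation to resize the device.
        return candidate
    return None
```

For instance, if the predicted mode supports 3 touch interactions while the foreground application requires only 2, the recommendation is provided; if it supports fewer than required, the prediction is rejected and the device keeps its current form factor.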
  • Figure 1 illustrates an exemplary use case of modifying a form factor for a mobile device associated with a user, according to an embodiment of the present invention.
  • Figure 2 illustrates a schematic block diagram of a mobile device for determining a form factor based on the current operating state of the user and/or the digital state of the mobile device, according to an embodiment of the present invention.
  • Figure 3 illustrates a schematic block diagram of modules of the mobile device for predicting a form factor based on the current operating state of the user and/or the digital state of the mobile device, according to an embodiment of the present invention.
  • Figures 4a and 4b illustrate schematic block diagrams for predicting a form factor without and with validation, respectively, according to various embodiments of the present invention.
  • Figure 5a illustrates a detailed block diagram of the form factor prediction model and the validation model for predicting a form factor, according to an embodiment of the present invention.
  • Figure 5b illustrates a detailed block diagram of the form factor prediction model and the validation model for predicting a form factor during the training phase, according to an embodiment of the present invention.
  • Figure 6 illustrates an exemplary process flow for determining a form factor based on the current operating state of the user and/or the digital state of the mobile device, according to an embodiment of the present invention.
  • Figure 7 illustrates an exemplary use case of acceptance of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
  • Figure 8 illustrates an exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
  • Figure 9 illustrates another exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
  • Any terms used herein such as “includes,” “comprises,” “has,” “consists,” and similar grammatical variants do not specify an exact limitation or restriction, and certainly do not exclude the possible addition of one or more features or elements, unless otherwise stated. Further, such terms must not be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated, for example, by using limiting language including, but not limited to, “must comprise” or “needs to include.”
  • “Coupled” refers to any logical, optical, physical, or electrical connection, link, or the like by which electrical or magnetic signals produced or supplied by one system element are imparted to another coupled or connected element.
  • Phrases and/or terms including, but not limited to, “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “further embodiment,” “furthermore embodiment,” “additional embodiment,” or other variants thereof do not necessarily refer to the same embodiments.
  • One or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, may be found in more than one embodiment, may be found in all embodiments, or may be found in no embodiments.
  • The present disclosure relates to intelligent systems and methods to predict an ideal form factor for a mobile device based on the current operating and digital states during operation of the mobile device.
  • The form factor is an indicative parameter for different sizes of mobile devices.
  • The form factor of the mobile phone is its size, shape, and style, as well as the layout and position of its major components.
  • The form factor may correspond to a holding size of the mobile device.
  • The form factors may be categorized into phone, phablet, band, or tablet modes. Based on the predicted form factor, the operating system of the mobile device may automatically adjust the physical dimension(s) and/or display screen of the mobile device.
  • Figure 1 illustrates an exemplary use case 100 of modifying a form factor for a mobile device 102 associated with a user 104.
  • The use case 100 depicts a mobile device 102 (represented as 102a, 102b, 102c, and 102d in different form factors) associated with a user 104 (represented as 104a, 104b, 104c, and 104d in different operating conditions).
  • Examples of the mobile device 102 may include a mobile phone, a tablet, a phablet, and a wearable band-sized device.
  • The mobile device 102 may be an adjustable display device.
  • The mobile device 102 may be configured to modify its physical display screen size based on modification of at least one physical dimension, such as length, width, etc.
  • The size of the user interface or display interface may also be modified.
  • The mobile device 102 may be a communication device having two or more modes or form factors, wherein each of the modes corresponds to the mobile device 102 functioning as a tablet, a mobile phone, a phablet, or a wearable band-sized device, respectively. Each of the modes corresponds to a different form factor for the mobile device 102.
  • The mobile device 102 may be configured to determine a current operating state of the user 104 based on one or more of a physical state, a holding state, and an environmental state of the user 104 during operation of the mobile device 102.
  • For example, the mobile device 102a may determine the physical state of the user 104a, i.e., that the user 104a is walking. Further, the mobile device 102a may determine the holding state, i.e., that the user 104a is holding the mobile device 102a using one hand. Additionally, the mobile device 102a may determine the environmental state, i.e., that the user 104a is currently on a street. The operating state of the user is determined cumulatively based on one or more of these states.
  • The mobile device 102 may be configured to determine a current digital state of the mobile device 102 based on the state of at least one software component of the operating system of the mobile device 102 during its operation.
  • The mobile device 102a may determine one or more of, but not limited to, application history (such as the top or last 5 used mobile applications), mobile applications in the foreground, mobile applications in the background, work mode/private mode, battery level, network signal level, and actively paired device(s).
  • The mobile device 102a may be operating in a band mode, i.e., a mode in which the display size and physical dimensions of the mobile device correspond to a wearable band.
  • The mobile device 102a may detect a change in at least one of the current operating state of the user 104b and the current digital state of the mobile device 102a.
  • The change in the current operating state of the user 104b may include a change in the environmental state of the user. Specifically, the user 104 may have moved from the street to his home.
  • Accordingly, the mobile device 102a may predict a new form factor for the mobile device 102a.
  • The predicted form factor may correspond to phone mode, i.e., the mobile device 102a in band mode may be converted to the physical dimensions and display size of the mobile device 102b in mobile phone mode.
  • The different form factor values or modes described in the present invention may correspond to the following exemplary values of display sizes or physical dimensions of the mobile device.
  • The size may correspond to 5.5 to 7 inches.
  • The size may correspond to 4.5 to 5.5 inches.
  • The size may correspond to less than 4.5 inches.
  • The size may correspond to greater than 7 inches.
  • These values are only for exemplary purposes and may vary.
  • The number of form factor modes may be categorized into more than the above-discussed 4 modes, and each mode may have further distinguishing values of display sizes or physical dimensions.
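The four size ranges above can be expressed as a simple classifier. Note that the text does not state which range belongs to which mode; the mapping below (band, phone, phablet, tablet in increasing size) is an assumption inferred from typical device sizes:

```python
# Hypothetical mapping of display size (inches) to form factor mode.
# The range-to-mode assignment is an assumption, not stated in the text.
def classify_form_factor(display_inches: float) -> str:
    if display_inches < 4.5:
        return "band"      # less than 4.5 inches
    if display_inches < 5.5:
        return "phone"     # 4.5 to 5.5 inches
    if display_inches <= 7.0:
        return "phablet"   # 5.5 to 7 inches
    return "tablet"        # greater than 7 inches
```

As the text notes, more than four modes are possible; additional ranges would simply add branches to this ladder.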
  • The predicted form factor may be validated based on a number of touch interactions required for the current digital state, and further based on a number of touch interactions supported by the predicted form factor and the current operating state of the user 104b.
  • The predicted form factor may be used by the mobile device 102a to modify its current mode of operation to that of mobile device 102b.
  • The mobile device 102a in band mode may be automatically modified to mobile phone mode 102b in response to the validation.
  • The mobile device 102 may predict that an ideal/appropriate form factor in the current conditions would be phablet mode. Further, based on the validation, the mobile device 102b may automatically modify its mode from mobile phone 102b to phablet mode 102c by changing one or more physical dimensions and the display size.
  • The ideal form factor may be predicted as tablet mode.
  • Accordingly, the mobile device 102c may automatically modify its physical dimensions and display size to tablet mode 102d.
  • Figure 2 illustrates a schematic block diagram 200a of a mobile device 200 for determining a form factor based on the current operating state of the user and/or the digital state of the mobile device 200.
  • The mobile device 200 may be configured to determine the form factor based on the current operating state of the user and/or the digital state of the mobile device 200.
  • The mobile device 200 may include at least one processor 202, an Input/Output (I/O) interface 204, a display motor 206, a location module 208, sensors 210, a transceiver 212, and a memory 214.
  • The memory 214 may further include modules 216, a database 218, an operating system 220, and a display manager 222.
  • The mobile device 200 may be in communication with a connected device 224 to determine the current operating state of the user and/or the digital state of the mobile device 200.
  • Examples of the connected device 224 may include, but are not limited to, an Internet of Things (IoT) device, a docking station, smart glasses, and a wearable device such as a smart watch.
  • The connected device 224 may include at least a transceiver 226, sensors 228, a processor 230, and a memory 232.
  • The mobile device 200 may also be configured to determine the current operating state of the user and the digital state of the mobile device 200 independently, without any connected device.
  • The mobile device 200 may be an adjustable display device.
  • The mobile device 200 may be configured to automatically modify its physical display screen size by modifying at least one physical dimension, such as length, width, etc. Further, based on the modification in physical dimension(s) of the mobile device 200, the size of the user interface or display interface may also be modified, or vice versa.
  • The mobile device 200 may be a communication device having two or more modes, wherein each of the modes corresponds to the mobile device 200 functioning as a tablet, a mobile phone, a phablet, or a wearable band-sized device, respectively. Each of the modes corresponds to a different form factor for the mobile device 200.
  • The processors 202 and 230 may each include at least one data processor for executing processes in a Virtual Storage Area Network.
  • The processors 202 and 230 may each include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
  • The processors 202 and 230 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 204 and the transceiver 226, respectively.
  • The I/O interface 204 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), radio frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
  • The mobile device 200 may communicate with one or more I/O devices.
  • The input device may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc.
  • The output device may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, plasma display panel (PDP), organic light-emitting diode (OLED) display, or the like), audio speaker, etc.
  • The processor 202 may be disposed in communication with a communication network via a network interface.
  • The network interface may be the I/O interface 204.
  • The network interface may connect to a communication network.
  • The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • The communication network may include, without limitation, a direct interconnection, a local area network (LAN), a wide area network (WAN), a wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
  • The mobile device 200 may communicate with the connected device 224.
  • The network interface may employ connection protocols including, but not limited to, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • The processor 230 of the connected device 224 may communicate with the mobile device 200 using an I/O interface (not shown) and may transmit/receive data using the transceiver 226.
  • The memory 214 may be communicatively coupled to the at least one processor 202.
  • The memory 232 of the connected device 224 may be communicatively coupled to the at least one processor 230.
  • Each of the memories 214 and 232 stores data and instructions executable by the processors 202 and 230, respectively.
  • The memory 214 may include one or more modules 216 and a database 218 to store data.
  • The one or more modules 216 may be configured to perform the steps of the present disclosure using the data stored in the database 218, to determine an ideal form factor for the mobile device and adjust the physical dimensions of the mobile device 200 based on the determined form factor.
  • Each of the one or more modules 216 may be a hardware unit which may be outside the memory 214.
  • The display motor 206 may be configured to automatically modify one or more physical dimensions of the mobile device 200 in response to determination of an ideal form factor for the mobile device 200 based on the current operating and digital states.
  • The display motor 206 may be configured to produce a torque to fold, unfold, roll, or unroll the flexible display of the mobile device from one size to another.
  • The display motor 206 may also be coupled with several other components, such as a driver and gears, to modify the size of the flexible display and the physical dimensions of the mobile device. As display motors and mechanisms to adjust flexible displays are well known in the art, they are not described here in detail.
  • The location module 208 may include a global positioning system (GPS).
  • The location module 208 may be configured to provide the current location of the user or the mobile device 200 to the modules 216.
  • The current location may form a part of the environmental state related parameters when determining the current operating state of the user.
  • The location module 208 may include an online software application for determining the current location of the user or the mobile device 200.
  • The sensors 210 may include one or more sensors for determining the current operating state of the user while operating the mobile device 200.
  • Various examples of the sensors 210 may include inertial, touch, pressure, and illumination sensors, for example, but not limited to, an accelerometer, gyroscope, barometer, and touch sensors.
  • The sensors 210 may be configured to provide inputs to the modules 216 to determine the current operating state of the user, which includes one or more of a physical state, a holding state, and an environmental state of the user.
  • The sensors 228 of the connected device 224 may also supplement the inputs required for determining the current operating state of the user.
  • The sensors 228 may include, but are not limited to, accelerometer, gyroscope, heart rate, presence, visual, illumination, and docking sensors.
  • The display manager 222 may be configured to modify the display size of the user interface.
  • The display manager 222 has the instantaneous value of the display size available for content display at every step of a transition, so that the user interface can adjust. For example, when the form factor has to be modified from mobile phone mode to tablet mode, resulting in enlargement of the display of the mobile device 200, the display manager 222 is configured for re-segmentation of the current display. The re-segmentation is performed in a manner such that the change in the currently presented display interface on the mobile device 200 is seamless.
  • The physical size of the rolled display at any moment may be provided by the display motor 206 to the hardware interfacing layer, e.g., the device hardware drivers. If the value of the physical size of the rolled display is known at any moment, then the operating system may render/readjust the user interface (UI) components as per the available display size. UI components may either be resized to fit the reduced/enlarged size or be moved in/out of the UI, depending on the software implementation. As the re-segmentation or re-arrangement of flexible displays is well known in the art, it is not described here in more detail.
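The move-in/move-out behavior described above can be sketched as a simple layout pass. This is an illustrative sketch only; the component names, priority ordering, and width-based fitting rule are all assumptions, not the patent's implementation:

```python
# Hypothetical relayout pass: given the instantaneous visible width reported by
# the display motor, keep the components that fit and move the rest out of view.
def relayout(components, visible_width):
    """components: list of (name, min_width) pairs, in priority order."""
    shown, hidden = [], []
    used = 0
    for name, min_width in components:
        if used + min_width <= visible_width:
            shown.append(name)          # component fits the currently unrolled area
            used += min_width
        else:
            hidden.append(name)         # moved out of the UI until more display unrolls
    return shown, hidden
```

Running the pass again with a larger width (after the display unrolls further) brings previously hidden components back into view, which is what makes the transition appear seamless.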
  • Figure 3 illustrates a schematic block diagram of the modules 216 of the mobile device 200 for determining a form factor based on the current operating state of the user and/or the digital state of the mobile device 200, according to an embodiment of the present disclosure.
  • The modules 216 may include an input aggregator module 302, a detection module 304, a form factor prediction module 306, a validation module 308, a rate module 310, an output module 312, and an artificial intelligence (AI) module 318.
  • The input aggregator module 302 may be configured to receive inputs from one or more sensors (e.g., sensors 210) and the operating system of the mobile device. Further, in one embodiment, the sensors (e.g., sensors 228) of the connected device(s) may also supplement the inputs required for determining the current operating state of the user. In one embodiment, the sensors of the connected device(s) may include, but are not limited to, accelerometer, gyroscope, heart rate, and illumination sensors. Based on the inputs, the input aggregator module 302 may determine the current operating state of the user and the current digital state of the mobile device during operation of the mobile device by the user. The current operating state may include one or more of a physical state, a holding state, and an environmental state of the user.
  • The current digital state of the mobile device may be determined based on a current state of at least one software component of an operating system of the mobile device.
  • The operating system may provide inputs related to application usage, clock, calendars/schedule, network, and battery. Accordingly, the current operating state measures the ease or difficulty of operating the mobile device, while the current digital state measures the interaction requirements of the mobile device.
  • The current operating state and the current digital state may be represented in the form of state vectors. In one embodiment, the overall state may be represented as a single state vector computed from the current operating state and the current digital state.
  • The current operating state may be utilized for determining a number of touch interactions provided/supported by the user to the mobile device at any specific moment. Further, in an embodiment, the current digital state may be used for determining a number of touch interactions needed/demanded by the mobile device at a specific moment.
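The supported-versus-required comparison above can be made concrete with a small lookup-based sketch. All table entries, numeric interaction counts, and state names here are illustrative assumptions; the text does not specify how the two counts are computed:

```python
# Hedged sketch of the validation rule: accept the predicted form factor only
# when the interactions it supports (given the operating state) cover the
# interactions the current digital state demands. Values are invented examples.
REQUIRED_BY_APP = {"video_player": 1, "text_editor": 10, "maps": 4}    # touches demanded
SUPPORTED = {("phone", "one_hand"): 6, ("tablet", "one_hand"): 2,
             ("tablet", "both_hands"): 12}                             # touches supported

def accept(predicted_mode, operating_state, foreground_app):
    required = REQUIRED_BY_APP[foreground_app]
    supported = SUPPORTED.get((predicted_mode, operating_state), 0)
    return supported >= required
```

This mirrors the rejection use cases of Figures 8 and 9: a tablet-mode prediction is rejected when the user's one-handed operating state cannot supply the touch interactions the foreground application demands.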
  • The physical state of the user may be predicted based on inputs from one or more sensors of the mobile device and/or the connected device(s), e.g., an accelerometer and/or gyroscope.
  • The physical state may indicate a current physically active state, state of motion, or posture of the user, such as walking, sitting, running, standing, climbing stairs, etc.
  • The physical state represents conditions which define the current manner of movement of the user.
  • The sensors of the connected device (e.g., sensors 228) may also supplement the inputs required for determining the physical state of the user.
  • The sensors of the connected device may include, but are not limited to, an accelerometer, a gyroscope, and a heart rate sensor.
  • The physical state may be predicted using machine learning or deep learning models associated with human activity recognition principles well known in the art.
  • The output of the input aggregator module 302 for the physical state may be an array of dimensions equal to the total number of supported physical states. For example, if the total supported physical states are nine, including walking, resting, climbing up, climbing down, standing, running, resting, bike riding, and dancing, then the output corresponding to the physical state may be a 1x9 dimension vector. Each value in the 1x9 vector, e.g., 0 for true and 1 for false, may indicate the current physical state.
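The 1x9 vector is an indicator encoding of the detected activity. The sketch below follows the example's stated convention (0 marks the detected state, 1 marks all others) and reproduces the state list verbatim, including its repetition of "resting"; it is an illustration, not the patent's encoder:

```python
# Indicator vector for the physical state, per the convention in the example
# above (0 = true/detected, 1 = false). The published list repeats "resting".
PHYSICAL_STATES = ["walking", "resting", "climbing up", "climbing down",
                   "standing", "running", "resting", "bike riding", "dancing"]

def physical_state_vector(current):
    # Mark the detected state's position(s) with 0 and every other state with 1.
    return [0 if state == current else 1 for state in PHYSICAL_STATES]
```

For example, with "walking" detected, only the first of the nine positions is 0.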
  • The holding state of the user may be predicted using inputs from the accelerometer, touch sensors, and/or gyroscope of the mobile device.
  • The sensors of the connected device (e.g., sensors 228) may also supplement the inputs required for determining the current operating state of the user.
  • The sensors of the connected device (e.g., a smartwatch) may include, but are not limited to, an accelerometer, a gyroscope, and a heart rate sensor.
  • The sensors of a connected docking station device may include a docking station sensor.
  • The holding state of the user may indicate a manner of holding the mobile device by the user during the operation of the mobile device.
  • The holding state of the user may indicate whether the user is holding the mobile phone using his left hand, right hand, both hands in portrait orientation, or both hands in landscape orientation. Further, the holding state may indicate whether the user has currently docked the mobile device at a docking station, or whether the mobile device is in a driving mode. In an embodiment, the holding state may be predicted using machine learning or deep learning models associated with operating hand recognition principles well known in the art.
  • The output of the input aggregator module 302 for the holding state may be an array of dimensions equal to the total number of supported holding states.
  • For example, if the total supported holding states are ten, including, but not limited to, RHRO (right hand right operate), LHLO (left hand left operate), RHLO (right hand left operate), LHRO (left hand right operate), BHBO (both hands both operate), stylus, mouse, car dock, and bike dock, then the output corresponding to the holding state may be a 1x10 dimension vector. Each value in the 1x10 vector, e.g., 0 for true and 1 for false, may indicate the current holding state.
  • the environmental state of the user may be determined using inputs from the location module 208.
  • the environmental state of the user may indicate at least a location of the user and/or indicate the environment in vicinity of the user, while operating the mobile device 200.
  • the environmental state may indicate whether the user is within his/her home or office, or whether the user is on street or in a park, etc.
  • the environmental state of the user may indicate current temperature, weather, and a time of the day.
  • Such environmental state data may be derived using a software application (not shown) residing inside the mobile device, or using other known mechanisms.
  • the environmental state of the user may be determined using inputs from presence, visual, and/or illumination sensors of one or more connected IoT devices (e.g., device 224) or sensors of the mobile device.
  • the output of the input aggregator module 302 for the environmental state may be an array of dimensions equal to total supported environmental states.
  • for location, an 11-dimensional vector may be used; for weather, a 4-dimensional vector may be used; for outlook, a 4-dimensional vector may be used; for indoor location, a 7-dimensional vector may be used; for hour, a 6-dimensional vector may be used; and for illumination, a 5-dimensional vector may be used.
  • the output corresponding to the environmental state may be a 6x11 dimensional vector.
  • Each value in the 6x11 vector may indicate the current environmental state. It may be apparent to a person skilled in the art that zero padding will be done for those sub-states of environmental states (e.g., weather, outlook, indoor location, hour, and illumination) which have dimensions less than the highest dimension, i.e., 11.
  • the current digital state of the mobile device may be determined based on a current state of at least one software component of an operating system of the mobile device.
  • the input aggregator module 302 may receive one or more inputs from the operating system of the mobile device and determine one or more of, but not limited to, application history such as top or last 5 used mobile applications, mobile applications in foreground, mobile applications in background, work mode/private mode, battery level, network signal level, and actively paired device(s).
  • the current digital state of the mobile device determined by the input aggregator module 302 may include a current digital operation executed on the mobile device.
  • the current digital state may indicate whether the user is messaging, checking notifications, watching a video, browsing, or listening to songs on a software application.
  • the current digital state of the mobile device may include a current mode of operation of the mobile device, for example, but not limited to, personal mode, work mode, etc.
  • the current digital state of the mobile device represents an overall software state of the mobile device including sub-states of one or more software components, such as applications, network, battery level, and/or connected devices.
  • the input aggregator module 302 may be configured to determine a current vectorized digital state of the mobile device based on the current digital state, which represents the current user activities, such as multi-tasking, video calling, browsing, messaging, etc.
  • the output of the input aggregator module 302 for the current digital state may be an array of dimensions 9x8.
  • the parameters for determining current digital state along with the possible values and/or dimensions are depicted below in Table 1.
  • the output corresponding to the digital state may be a 9x8 dimensional vector.
  • Each value in the 9x8 vector (e.g., 0 for true and 1 for false) may indicate the current digital state. It may be apparent to a person skilled in the art that zero padding will be done for those contexts of digital states which have dimensions less than the highest dimension, i.e., 8.
  • the detection module 304 may monitor and detect a change in the current operating state of the user and/or the current digital state of the mobile device.
  • the detection module 304 may detect a change in current physical state of the user from running to sitting.
  • the detection module 304 may detect a change in holding state of the user from one hand to both hands.
  • the detection module 304 may detect a change in environmental state of the user from street to home.
  • the detection module may detect a change in the digital state of the mobile device, e.g., an initiation of a video call detected through an application running in the foreground.
  • the form factor prediction module 306 may predict an ideal form factor for the mobile device, based on the change detected in the current operating state of the user and/or the current digital state of the mobile device. Referring to the exemplary use case of Figure 1, upon detecting a change in the current operating state of the user from walking on the street to walking inside the home, the form factor prediction module 306 may predict an ideal form factor as a phone mode from the currently existing band mode. The predicted form factor is based on the detected change in the current operating state, since the change would indicate that while walking inside the home, the user may be able to operate the phone in the default phone mode (instead of the band mode used while walking on the street), as a firm grip may not be required inside the home as compared to the street.
  • the form factor prediction module 306 may determine a roll value or fold value indicating the percentage of form factor (i.e., size or shape) required from a current form factor in the current operating and digital states.
  • the roll/fold value may correspond to 125% form factor, i.e., the form factor of the mobile device needs to be modified to 125% of the current form factor, which may correspond to a change from phone to tablet mode.
  • the roll/fold value may be specified with respect to a minimum or maximum size of the flexible display of the mobile device.
  • the form factor prediction module 306 may receive inputs from the AI/training module 318 including transition state based on historical data corresponding to manual transition of form factor performed by users in the past under various operating and digital conditions. In another embodiment, the form factor prediction module 306 may retrieve the instantaneous display size value from the display manager 222 and derive the value of transition in relation to display size (i.e., transition required from instantaneous display size to the predicted form factor). The ratio of the current display size with the maximum display size will provide the percentage of transition value at any moment.
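The display-size ratio described above can be sketched as a small helper; the 4.5-inch and 7.5-inch sizes appearing elsewhere in this disclosure are used only as example values.

```python
# Sketch of the transition-value derivation described in the text: the ratio
# of the current display size to the maximum display size gives the
# percentage of transition at any moment.
def transition_percent(current_size: float, max_size: float) -> float:
    """Percentage of the maximum display size currently unrolled."""
    return 100.0 * current_size / max_size

def required_transition(current_size: float, target_size: float,
                        max_size: float) -> float:
    """Change (in percentage points of max size) needed to reach the target."""
    return (transition_percent(target_size, max_size)
            - transition_percent(current_size, max_size))
```

For example, a 4.5-inch display on a device whose maximum size is 7.5 inches corresponds to a 60% transition value, so reaching the fully unrolled tablet mode would require a further 40-point transition.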
  • the transition state input may be an array of transition values and rates associated with various operating and digital states.
  • the different transition states may include band, phone, phablet, and tablet; while the rate of transition may include three rates including slow, normal and fast.
  • the transition state input may be a 2x4 dimensional vector. Each value in the 2x4 vector (e.g., 0 for true and 1 for false) may indicate the current transition state.
  • the validation module 308 may be configured to validate the predicted form factor by determining whether to accept the predicted form factor based on touch interactions.
  • the touch interactions may include any touch-based interactions of the user with the mobile device such as, but not limited to, taps, swipes, long press, drags, pinch, etc. Additionally, the touch interactions may include the frequency and occurrence rate of such gestures over a period of time.
  • the determination of whether to accept the predicted form factor is based on a number of touch interactions required for the current digital state of the mobile device, and further based on a number of touch interactions supported by the predicted form factor and the current operating state of the user.
  • the validation module 308 is configured to filter low confidence output of the form factor prediction module 306 due to incorrect data input through the input aggregator module 302. In other words, the validation module 308 is configured to validate the form factor because of conflicting input states due to wrong predictions of predictive states (e.g., physical and/or holding state) or genuine user mistakes.
  • the validation module 308 may be configured to determine a touch interaction support quotient (TISQ) based on the number of touch interactions supported by the predicted form factor and the current operating state of the user.
  • TISQ corresponds to a level of touch interactivity which certain operating conditions may support at a specific moment. For example, when a user is running and operating a phone with only the secondary hand, he/she may not be able to perform pinch gestures or comfortably use the keyboard in this state. So, this particular condition may be presumed to have a very low TISQ.
  • the validation module 308 may be configured to determine a touch interaction requirement quotient (TIRQ) based on the number of touch interactions required for the current digital state of the mobile device.
  • TIRQ corresponds to the level of touch interactivity which a particular digital state requires at a specific moment. For example, when a user is performing a video call or playing a video, the amount of touch interaction needed is very low, so the TIRQ will be presumed to be low in this case. However, during instant messaging or mailing, the level of touch interaction required is very high, so the TIRQ will be presumed to be high in this case.
  • the validation module 308 may compare the TISQ and the TIRQ to accept or reject the predicted form factor. In one embodiment, if the TISQ is greater than the TIRQ, then the output corresponds to accepting the predicted form factor. If the TISQ is less than the TIRQ and the difference is small, then the output corresponds to accepting the predicted form factor. If the TISQ is less than the TIRQ and the difference is significantly large, then the output corresponds to rejecting the predicted form factor.
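The acceptance rule above can be sketched as follows; the numeric tolerance separating a "small" from a "significantly large" difference is an assumption, as no specific value is given here.

```python
# Sketch of the validation rule: accept when supported interactivity (TISQ)
# covers the requirement (TIRQ), accept a small shortfall, reject otherwise.
# The tolerance value is an illustrative assumption.
def validate_form_factor(tisq: float, tirq: float,
                         tolerance: float = 0.1) -> bool:
    if tisq >= tirq:
        return True                    # supported interactivity suffices
    return (tirq - tisq) <= tolerance  # small shortfall is still acceptable
```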
  • the rate module 310 may be configured to determine a rate or speed to implement the predicted form factor for the mobile device upon validation or determination to accept the predicted form factor.
  • the transition from the existing form factor to the predicted form factor may be completed in a predefined transition duration.
  • the rate or speed of transition is determined corresponding to the predefined transition duration.
  • the rate module 310 may determine a roll rate to implement the predicted form factor.
  • the roll rate may indicate a rate or speed at which the mobile device is rolled from the existing physical dimensions and display size to the physical size corresponding to the predicted form factor.
  • the rate may correspond to a rate of folding the mobile device to implement the predicted form factor.
  • the rate module 310 may determine the rate of transition from a current form factor to the predicted form factor as one of fast, normal, or slow.
  • Each of these three exemplary rates may correspond to a different speed of performing the transition. For example, for a transition from a phone mode with a display size of 4.5 inches to a tablet mode with a display size of 7.5 inches, the transition duration may be fixed, and a rate of transition may be determined accordingly.
  • a slow rate of transition may correspond to greater than 2 seconds
  • a normal rate of transition may correspond to 1-2 seconds
  • a fast rate of transition may correspond to less than 1 second.
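The duration-to-rate mapping above can be expressed directly; the thresholds follow the exemplary values given in the text.

```python
# Map a transition duration to the three exemplary rates:
# slow > 2 s, normal 1-2 s, fast < 1 s (boundaries treated as normal).
def classify_rate(duration_s: float) -> str:
    if duration_s < 1.0:
        return "fast"
    if duration_s <= 2.0:
        return "normal"
    return "slow"
```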
  • the rate module 310 may be configured to determine the rate based on the current operating state of the user and the current digital state of the mobile device using a machine learning (ML) or neural network-based learning model stored at AI/training module 318. Specifically, the automatic determination of the rate is based on historical data related to a recorded speed at which the user previously performed the manual change of the form factor on the mobile device in similar operating and digital conditions.
  • the value of rate may be specific to each user, since the rate is determined based on the on-device historic learning about the patterns of rolling/unrolling/folding/unfolding of the mobile device by the user manually. For example, when a user on the street opens a messaging application to send a message, he usually rolls/unrolls/folds/unfolds the device very quickly. However, when the same user is at home and opens a video application, he usually rolls/unrolls the device casually and slowly. Accordingly, this user behavior regarding the urgency of rolling/unrolling/folding/unfolding is also recorded, and automated suggestions for the rate of transition are based on such behavioral data.
  • the operating system APIs may be the source of input data.
  • the rate at which the size of display interface changes for content representation may be extracted using the operating system of the mobile device.
  • the rate module 310 may use the above recorded rates for training and rate prediction in future.
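The record-and-reuse behavior described above might be sketched as a simple per-state history; the storage scheme and majority-vote prediction are illustrative assumptions standing in for the ML/neural-network model mentioned in the text.

```python
from collections import Counter, defaultdict

# Hypothetical recorder: log the rate of each manual transition keyed by the
# (operating state, digital state) pair, then suggest the most frequent rate
# for that pair later. A learned model would replace the majority vote.
class RateRecorder:
    def __init__(self):
        self._history = defaultdict(Counter)

    def record(self, operating_state: str, digital_state: str,
               rate: str) -> None:
        self._history[(operating_state, digital_state)][rate] += 1

    def predict(self, operating_state: str, digital_state: str,
                default: str = "normal") -> str:
        counts = self._history.get((operating_state, digital_state))
        return counts.most_common(1)[0][0] if counts else default
```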
  • the output module 312 may be configured to provide the predicted form factor for the mobile device based on the validation to accept the predicted form factor.
  • the output module 312 may provide the predicted form factor to the operating system of the mobile device.
  • the outputting of the predicted form factor corresponds to a recommendation to modify the physical size/dimensions of the mobile device.
  • the operating system may implement the predicted form factor by modifying the physical size/dimensions and display interface size of the mobile device.
  • the AI/training module 318 may include a plurality of neural network layers.
  • neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), Restricted Boltzmann Machine (RBM).
  • the learning technique is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction.
  • Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
  • At least one of a plurality of CNN, DNN, RNN, RBM models and the like may be implemented to thereby achieve execution of the present subject matter's mechanism through an AI model.
  • a function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor.
  • the processor may include one or a plurality of processors.
  • one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
  • the one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory.
  • the predefined operating rule or artificial intelligence model is provided through training or learning.
  • the AI/training module 318 may be configured to train the form factor prediction module.
  • the training may include detecting a change in at least one of the current operating state of the user and the current digital state of the mobile device. Further, the current operating state of the user and the current digital state of the mobile device may be recorded.
  • the current operating state of the user comprises a form factor state of the mobile device, along with a physical state, a holding state, and an environmental state of the user during operation of the mobile device.
  • the training may further include training a first ML or neural network-based learning model using the physical state, the environmental state, the holding state, and the current digital state as input data and the rolling state as a first training label.
  • the training may include determining an interaction quotient based on a number of touch interactions made by the user in the current digital state. Additionally, the training may include training a second ML or neural network-based learning model using the physical state, the environmental state, the holding state, and the form factor state as input data and the interaction quotient as a second training label. Moreover, the training may include training a third ML or neural network-based learning model based on the current digital state as input data and the interaction quotient as a third training label. Finally, the training may include clustering the interaction quotient into a plurality of classes.
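The recording step of this training procedure can be sketched as follows; the feature dictionary layout is a hypothetical illustration, and the actual fitting of the three learning models is omitted.

```python
# Sketch of assembling one training sample when a state change is detected:
# the four states become features and the user's chosen form factor state
# becomes the label for the first learning model.
def make_training_sample(physical, holding, environmental, digital,
                         form_factor_state):
    features = {"PS": physical, "HS": holding,
                "ES": environmental, "DS": digital}
    return features, form_factor_state  # (input data, first training label)

sample = make_training_sample("walking", "RHRO", "street",
                              "messaging", "band")
```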
  • Figures 4a and 4b illustrate schematic block diagrams 400a and 400b for predicting form factor without and with validation respectively.
  • the figures illustrate the technical advantages of utilizing validation module before providing the final recommendation of predicted form factor by the form factor prediction module.
  • the form factor prediction module 306 may predict an ideal form factor for the mobile device, based on the changes detected in the current operating state of the user and/or the current digital state of the mobile device.
  • the form factor prediction module 306 may be depicted as a form factor prediction model 402 to predict the ideal form factor for the mobile device.
  • the form factor prediction model 402 may be a machine learning or neural network-based model to predict the ideal form factor.
  • the form factor prediction model 402 may receive inputs related to the current operating state of the user and the digital state (DS) of the mobile device from one or more state provider models (not shown) included within the input aggregator module 302.
  • the current operating state may include physical state (PS), holding state (HS), and environmental state (ES). While the state provider models used for inputs related to DS and ES states are logical, the state provider models for PS and HS may be predictive in nature. As such, the inputs related to PS and HS may include some errors.
  • the PS and HS states are predicted by AI-based state provider models, which will generally have less than 100% accuracy, thereby introducing errors in the prediction of the ideal form factor.
  • even the DS and ES may include some errors.
  • the user of the mobile device may have opened an application inadvertently, which may introduce errors and reduce accuracy of the form factor prediction model 402.
  • the states predicted by the state provider models may be conflicting in nature.
  • the current operating state and the digital state may conflict with each other at times due to errors introduced while predicting these states, which may lead to incorrect prediction of the ideal form factor for the mobile device, as depicted in Figure 4a.
  • Figure 4b illustrates a validation model 404 corresponding to the validation module 308 which may be configured to validate the predicted form factor by determining whether to accept the predicted form factor based on touch interactions.
  • the purpose of the validation model 404 is to improve the overall accuracy of the form factor prediction model 402 by filtering out the inaccurate predictions in the form factor value outputted by the form factor prediction model 402 (which may occur due to the incorrect/noisy input data of one or more state providers).
  • the validation model 404 outputs True or False based on the state provider inputs and the predicted form factor value.
  • the validation model 404 uses the concept of touch interactivity, which is explained in conjunction with Figures 5 and 6 to filter the conflicting states.
  • Figure 5a illustrates a detailed block diagram 500a of form factor prediction model and validation model for predicting form factor, according to an embodiment of the present invention.
  • the input aggregation model 502 and the state provider models 504 may be included as a part of the input aggregator module 302, as previously discussed.
  • alternatively, the input aggregation model 502 may be included as a part of the input aggregator module 302, while the state provider models 504 may be provided as a separate module.
  • the input aggregation model 502 may be configured to receive inputs from one or more sensors (e.g., sensors 210 and/or sensors 228) and an operating system of the mobile device.
  • the input aggregation model may be configured to receive inputs from sensors such as, but not limited to, an accelerometer, a gyroscope, a heart rate sensor, and an illumination sensor.
  • the input received from the sensors and the operating system may include inertial and interaction inputs from mobile device, sensory inputs from one or more connected devices (e.g., 224), usage contextual inputs from the operating system of the mobile device, and environmental inputs.
  • the input aggregation model may include form factor inputs, i.e., the current form factor value for the mobile device.
  • the form factor inputs may be related to whether the mobile device is currently in one of mobile mode, phablet mode, or tablet mode.
  • the form factor inputs may also include historical form factor inputs, which may be associated with historical data points related to appropriate form factor values for different user operating states and digital states. More specifically, the form factor historical inputs may include data related to recorded values of user preferred form factors of the mobile device in various situations.
  • the state provider models 504 may include models for providing physical state, holding state, environmental state, digital state, and form factor. These state provider models 504 may be configured to determine the physical state, holding state, environmental state, digital state, and a current value of form factor, in accordance with various embodiments of the present invention. Additionally, the state provider models may include a state vocabulary model storing definitions associated with operating states, digital states and their associated inputs and outputs, as discussed throughout this disclosure. More specifically, the state vocabulary model is a configuration file associated with meta data related to inputs/outputs of all various modules, as discussed throughout the disclosure.
  • the state provider models 504 may provide physical state, holding state, environmental state, and the digital state as four different inputs to the form factor prediction model 506 for prediction of an ideal form factor value for the mobile device.
  • the predicted ideal form factor value may then be validated using the trained validation module 514.
  • the physical state, holding state, and the environmental state may be provided as an input to the interaction support model (ISM) 508.
  • the ISM 508 may be configured to determine/predict a touch interaction support quotient (TISQ) based on the number of touch interactions supported by the predicted form factor and the current operating state of the user.
  • TISQ corresponds to a level of touch interactivity which certain operating conditions may support at a specific moment. For example, when a user is running and operating a phone with only the secondary hand, he/she may not be able to perform pinch gestures or comfortably use the keyboard in this state. So, this particular condition may be presumed to have a very low TISQ.
  • the digital state may be provided as an input to an interaction requirement model (IRM) 510.
  • the IRM 510 may be configured to determine a touch interaction requirement quotient (TIRQ) based on the number of touch interactions required for the current digital state of the mobile device.
  • TIRQ corresponds to the level of touch interactivity which a particular digital state requires at a specific moment. For example, when a user is performing a video call or playing a video, the amount of touch interaction needed is very low, so the TIRQ will be presumed to be low in this case. However, during instant messaging or mailing, the level of touch interaction required is very high, so the TIRQ will be presumed to be high in this case.
  • an interaction cluster manager (ICM) 512 may receive the TISQ and the TIRQ to accept or reject the predicted form factor.
  • the ICM 512 is used for clustering of real interaction quotient value outputted by an interaction quotient computer (IQC) 516, as depicted in Figure 5b.
  • the ICM 512 may be trained using an input value "I” (or Interaction Quotient "IQ") from the IQC 516, wherein the input value I or IQ is based on two different sub-input values I1 and I2.
  • the digital state may include 8 different contexts, i.e., background, history, network, security, connected devices, settings, resources, and foreground applications; I1 may be a subset of these 8 contexts and may only include the foreground and resources contexts along with the timestamp of that digital state.
  • the input I2 may be touch gesture data of user interaction in a particular digital state and may include the timestamp of the digital state.
  • the touch gesture data may relate to Tap/Swipe/Long Press/Drag/Double Tap/Multi Touch.
  • the input value I or IQ may be triggered whenever the input I1 changes, i.e., it may be calculated for one particular digital state session.
  • the IQ value corresponds to the input I provided to the ICM 512 during the training phase.
  • the ICM 512 may be configured to cluster the IQ data.
  • the purpose of creating a cluster is to categorize touch interactions in sub-categories.
  • the ICM 512 may create and maintain a 5-class cluster, i.e., very low, low, normal, high, and very high, based on the values. These classes may represent the level of touch interactivity in any particular digital state session (based on the user's actual touch behavior during that session).
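The five-class clustering might be sketched with fixed bounds for illustration; in the described system the classes would be learned from recorded IQ values rather than hard-coded.

```python
# Illustrative five-class binning of an interaction quotient in [0, 1].
# The bound values are assumptions; the ICM would derive its clusters
# from the IQ data it records.
IQ_CLASSES = ["very_low", "low", "normal", "high", "very_high"]

def iq_class(iq: float, bounds=(0.2, 0.4, 0.6, 0.8)) -> str:
    """Map an IQ value to one of the five interactivity classes."""
    for bound, label in zip(bounds, IQ_CLASSES):
        if iq < bound:
            return label
    return IQ_CLASSES[-1]
```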
  • the ICM 512 may receive the TISQ and the TIRQ to accept or reject the predicted form factor.
  • the ICM 512 may determine whether the 2 IQs predicted by the ISM 508 and the IRM 510, i.e., the TIRQ and the TISQ, belong to the same cluster or not. More particularly, if the TISQ is greater than the TIRQ, then the output corresponds to accepting the predicted form factor. If the TISQ is less than the TIRQ, then the clusters/classes of both the TISQ and the TIRQ are checked. If both the TISQ and the TIRQ belong to the same cluster, then the predicted form factor value is accepted; otherwise, it is rejected.
  • the output of the comparison of the TISQ and the TIRQ corresponds to accepting the predicted form factor if the difference is small. If the TISQ is less than the TIRQ and the difference is significantly large (i.e., the quotients fall in different clusters), then the output corresponds to rejecting the predicted form factor.
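The ICM decision above can be sketched as follows; `to_class` stands in for whatever five-class mapping the ICM maintains and is an assumed interface for illustration.

```python
# Sketch of the cluster-aware ICM rule: accept outright when TISQ > TIRQ;
# otherwise accept only if both quotients fall into the same cluster.
# `to_class` is any function mapping a quotient to its cluster label.
def icm_accept(tisq: float, tirq: float, to_class) -> bool:
    if tisq > tirq:
        return True
    return to_class(tisq) == to_class(tirq)
```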
  • Figure 5b illustrates a detailed block diagram 500b of form factor prediction model and the validation model to predict form factor during training phase, according to an embodiment of the present invention.
  • the input aggregation model 502 and state provider models 504 provide similar functionality as previously discussed in conjunction with Figure 5a.
  • the state provider models 504 also provide a current form factor state, which is preferred by user during the current operating state and the current digital state. This current form factor state may be recorded as a form factor training label corresponding to the current operating state and the digital state and may be used during actual prediction of the form factor, as depicted in Figure 5a.
  • the ISM and IRM may be trained with the corresponding training labels under different operating and digital states.
  • Figure 6 illustrates an exemplary process flow 600 for determining a form factor based on current operating state of the user and/or digital state of the mobile device.
  • the method includes receiving inputs from one or more sensors and an operating system of the mobile device. Further, in one embodiment, the sensors of the connected device(s) may also supplement the inputs required for determining the current operating state of the user.
  • the method includes determining a current operating state of the user and a current digital state of the mobile device. Based on the inputs received from sensors and the operating system, the current operating state of the user and the current digital state of the mobile device during operation of the mobile device by the user, may be determined.
  • the current operating state may include one or more of a physical state, a holding state, and an environmental state of the user.
  • the physical state may indicate a current physically active state of the user, such as, walking, sitting, running, standing, climbing stairs, etc. In other words, the physical state represents conditions which define the current manner of movement of the user.
  • the holding state of the user may indicate a manner of holding the mobile device by the user during the operation of the mobile device.
  • the holding state of the user may indicate whether the user is holding the mobile phone using his left hand, right hand, both hands with portrait orientation, or both hands with landscape orientation.
  • the environmental state of the user may indicate at least a location of the user and/or indicate the environment in vicinity of the user, while operating the mobile device 200.
  • the environmental state may indicate whether the user is within his/her home or office, or whether the user is on street or in a park, etc.
  • the environmental state of the user may indicate current temperature, weather, and a time of the day.
  • the current digital state of the mobile device may be determined based on a current state of at least one software component of an operating system of the mobile device.
  • the operating system may provide inputs related to application usage, clock, calendars/schedule, network, and battery. Accordingly, the current operating state measures the ease or difficulty of operating the mobile device, while the current digital state measures the interaction requirements of the mobile device.
  • the method may include determining a current form factor of the mobile device. In other words, it may be determined whether the mobile device is currently being used in phablet mode, phone mode, or tablet mode.
  • the method includes monitoring and detecting a change in at least one of a current operating state of the user and a current digital state of the mobile device.
  • a change may be detected in the current physical state of the user from running to sitting by the mobile device.
  • a change may be detected in holding state of the user from one hand to both hands.
  • a change may be detected in environmental state of the user from street to home.
  • a change may be detected in the digital state of the mobile device, e.g., an initiation of a video call detected through an application running in the foreground.
  • the method includes predicting a form factor for the mobile device based on the current operating state and the current digital state.
  • the form factor prediction module 306 may determine a roll value or fold value indicating the percentage of form factor (i.e., size or shape) required from a current form factor in the current operating and digital states.
  • the roll/fold value may correspond to 125% form factor, i.e., the form factor of the mobile device needs to be modified to 125% of the current form factor, which may correspond to a change from phone to tablet mode.
  • the method includes determining whether to accept the predicted form factor based on touch interactions.
  • the touch interactions may include any touch-based interactions of the user with the mobile device such as, but not limited to, taps, swipes, long press, drags, pinch, etc. Additionally, the touch interactions may include the frequency and occurrence rate of such gestures over a period of time.
  • the determination of whether to accept the predicted form factor is based on a number of touch interactions required for the current digital state of the mobile device, and further based on a number of touch interactions supported by the predicted form factor and the current operating state of the user. If it is determined to accept the predicted form factor, the method moves to step 612; otherwise, the method moves back to step 604.
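The acceptance determination can be sketched as an ordinal comparison between the supported and required interaction quantities (named TISQ and TIRQ in the use cases of this description). This is a hypothetical illustration; the numeric level ordering and the treatment of equality as acceptance are assumptions:

```python
# Hypothetical sketch of the validation step: accept the predicted form factor
# only if the touch interactions supported by the predicted form factor and the
# current operating state (TISQ) cover the touch interactions required by the
# current digital state (TIRQ). The ordinal levels are illustrative.

LEVELS = {"very low": 0, "low": 1, "normal": 2, "high": 3, "very high": 4}

def accept_prediction(tisq: str, tirq: str) -> bool:
    """Return True when the supported level meets or exceeds the required level
    (treating equality as acceptance is an assumption)."""
    return LEVELS[tisq] >= LEVELS[tirq]

# E.g., tablet mode at home with the phone on a surface during a video call:
# supported "very high" vs. required "low" -> the prediction is accepted.
```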
  • the method includes predicting a rate to transition the mobile device to the predicted form factor from a current form factor.
  • the rate of transition from a current form factor to the predicted form factor may be determined to be one of fast, normal, or slow.
  • Each of these three exemplary rates may correspond to a different speed of performing the transition.
  • the transition rate may indicate a rate or speed at which the mobile device is rolled or folded from the existing physical dimensions and display size to the physical size corresponding to the predicted form factor.
  • the rate of transition may be determined based on the current operating state of the user and the current digital state of the mobile device using a machine learning (ML) or neural network-based learning model. Specifically, the determination of the rate is based on historical data related to a recorded speed at which the user previously performed the manual change of the form factor on the mobile device in similar operating and digital conditions.
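As a rough, hypothetical sketch of the rate selection: the description calls for an ML or neural network model trained on the recorded history; a nearest-neighbour lookup over that history is used here purely for illustration, and the two-dimensional state encoding is an assumption:

```python
# Hypothetical sketch: pick a transition rate ("slow"/"normal"/"fast") from the
# historical record whose operating/digital state vector is closest to the
# current one. A real system would train an ML/neural model on this history.
from math import dist

def predict_rate(state_vec, history):
    """history: list of (state_vector, rate) pairs recorded when the user
    previously performed a manual form factor change in similar conditions."""
    nearest_state, nearest_rate = min(history, key=lambda rec: dist(rec[0], state_vec))
    return nearest_rate

history = [((0.0, 0.2), "slow"), ((0.9, 0.8), "fast")]
rate = predict_rate((1.0, 0.9), history)  # closest recorded state is "fast"
```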
  • the method includes providing the predicted form factor and the rate of transition for the mobile device based on a determination to accept the predicted form factor.
  • the predicted form factor and the rate of transition may be provided to the operating system of the mobile device for further processing.
  • the outputting of the predicted form factor corresponds to a recommendation to modify the physical size/dimensions of the mobile device.
  • the operating system may implement the predicted form factor by modifying the physical size/dimensions and display interface size of the mobile device.
  • the method includes re-training or updating the AI model.
  • it may be determined whether the user accepted the form factor changes or manually negated the changes. Based on a determination that the user negated the changes, the AI model may be re-trained or updated to adjust the weights in the neural network model.
  • Figure 7 illustrates an exemplary use case of acceptance of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
  • the initial operating conditions, i.e., the current operating state, may be determined.
  • the current operating state may include one or more of a physical state (PS), a holding state (HS), and an environmental state (ES) of the user.
  • the PS may correspond to the standing user
  • the ES may correspond to "inside kitchen”
  • the HS may correspond to "not held” (kept on kitchen slab).
  • the current digital state (DS) may correspond to "browsing recipe in cooking application”.
  • the initial/current form factor of the mobile device may be determined, i.e., phablet mode.
  • a video call may be received at the mobile device.
  • it may be determined whether the video call is picked.
  • a new form factor may be predicted based on changes in the initial operating state and the digital state at 708.
  • the form factor may be predicted as "Tablet mode", as a bigger device size may be better suited when multi-window mode is used at home and the device is kept on a surface.
  • the form factor value prediction may be based on state vectors of current operating state and the digital state.
  • the quantity of touch interactions supported is referred to as the TISQ.
  • the TISQ may be determined and categorized as "very high", as the user may provide very high touch operability in tablet mode while standing at home and while the phone is kept on a surface (not held).
  • the quantity of touch interactions required (the TIRQ) in multi-window mode (i.e., video calling and cooking recipe) may be estimated.
  • the TIRQ may be estimated based on the current digital state vectors.
  • the TIRQ may be determined and categorized as "low", as very few touch interactions are required during a video call and recipe viewing.
  • the TISQ may be compared with the TIRQ. Since the TISQ is higher than the TIRQ, the prediction may be accepted at 716; otherwise, it would be rejected at 718. Thus, the form factor value corresponding to tablet mode will be accepted and provided as an output to the operating system of the mobile device. In response, the mobile device may process the form factor value and initiate an automatic change in the form factor of the mobile device from phablet mode to tablet mode.
  • Figure 8 illustrates an exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
  • the initial operating conditions, i.e., the current operating state, may be determined.
  • the current operating state may include one or more of a physical state (PS), a holding state (HS), and an environmental state (ES) of the user.
  • PS may correspond to the "dancing user”
  • ES may correspond to "at club in dark”
  • HS may correspond to "primary hand”.
  • the current digital state (DS) may correspond to "camera application open in video mode”.
  • the initial/current form factor of the mobile device may be determined, i.e., phone mode.
  • a change in the initial digital state may be detected, i.e., an application related to reading an Excel file may be opened.
  • This application may have been opened by the user inadvertently.
  • a new form factor may be predicted based on changes in the initial operating state and the digital state.
  • the form factor may be predicted as "Tablet mode", as a bigger device size may be better suited for reading Excel files.
  • the form factor value prediction may be based on state vectors of current operating state and the digital state.
  • the quantity of touch interactions supported is referred to as the TISQ.
  • the TISQ may be determined and categorized as "low", as the user may provide only low touch operability even in tablet mode while dancing in a club and operating the phone with just one hand.
  • the quantity of touch interactions required (the TIRQ) in Excel reading and video mode may be estimated.
  • the TIRQ may be estimated based on the current digital state vectors.
  • the TIRQ may be determined and categorized as "high", as many touch interactions are required for operating an Excel-document-related application.
  • the TISQ may be compared with the TIRQ. Since the TISQ is lower than the TIRQ, the prediction will be rejected at 814. Thus, the form factor value corresponding to tablet mode will be rejected and no output recommendation would be provided to the operating system of the mobile device.
  • Figure 9 illustrates another exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
  • the initial operating conditions, i.e., the current operating state, may be determined.
  • the current operating state may include one or more of a physical state (PS), a holding state (HS), and an environmental state (ES) of the user.
  • the PS may correspond to "jogging"
  • the ES may correspond to "in park”
  • the HS may correspond to "primary hand”.
  • the current digital state (DS) may correspond to "locked and screen off”. Additionally, the initial/current form factor of the mobile device may be determined, i.e., band mode.
  • a change in the initial digital state may be detected, i.e., an application related to composing an email function may be opened.
  • This application may have been opened by the user inadvertently.
  • a new form factor may be predicted based on changes in the initial operating state and the digital state.
  • the form factor may be predicted as "Phablet mode", as a bigger device size may be better suited for composing emails.
  • the form factor value prediction may be based on state vectors of current operating state and the digital state.
  • the quantity of touch interactions supported is referred to as the TISQ.
  • the TISQ may be determined and categorized as "very low", as the user may provide only low touch operability even in phablet mode while jogging in the park and operating the phone with just one hand.
  • the quantity of touch interactions required (the TIRQ) in email composing may be estimated.
  • the TIRQ may be estimated based on the current digital state vectors.
  • the TIRQ may be determined and categorized as "high" as high touch interactions are required for composing an email.
  • the TISQ may be compared with the TIRQ. Since the TISQ is lower than the TIRQ, the prediction will be rejected at 914. Thus, the form factor value corresponding to the phablet mode will be rejected and no output recommendation would be provided to the operating system of the mobile device.
  • the form factor will not be modified, thereby eliminating false or inadvertent inputs due to a mistakenly opened application (triggering change in digital state).
  • the present invention facilitates the elimination of false inputs through validation of the predicted form factor value.
  • while the present invention provides an automated smart function to trigger form factor modification for mobile devices, it also facilitates identifying and eliminating inadvertent inputs by users.
  • the proposed solutions in the present disclosure provide intelligent systems and methods for predicting an ideal form factor for the mobile device based on the current operating and digital conditions of user and mobile device respectively.
  • the predicted form factor facilitates automatic adjustment of the form factor for the mobile device without any user intervention or manual commands.
  • the handling and operation of smartphones are made more efficient while the user operates the mobile device in different conditions and for different purposes.

Abstract

A method of providing a form factor for a mobile device is provided. The method comprises detecting a change in at least one of a current operating state and a current digital state of the mobile device. Further, a form factor is predicted for the mobile device based on the current operating state and the current digital state of the mobile device. A validation is performed on whether to accept the predicted form factor based on a number of touch interactions required for the current digital state of the mobile device and a number of touch interactions supported by the predicted form factor and the current operating state.

Description

METHOD OF RECOMMENDING FORM FACTOR OF ROLLABLE SMARTPHONE
The present invention generally relates to predicting a form factor for a mobile device, and more particularly relates to methods and systems for predicting the form factor for a mobile device based on a current operating state of the user and a current digital state of the mobile device.
With the ever-increasing usage of digital devices, people need different devices for different working conditions. For example, while mobile phones are generally preferred for voice calls and chatting, users prefer phablets or tablets for watching movies. Further, during exercising, users prefer wearable band-type devices. Accordingly, users need to buy multiple electronic devices of different sizes for different purposes.
Currently, one solution to address the issue of multiple devices for different purposes is the usage of flexible display screens. Flexible display screens are gaining popularity and becoming a mainstream technology, as they are being employed in various devices, such as televisions, wearable devices, smartphones, tablet computers, etc., and even as standalone flexible displays. Further, for mobile phones, rollable smartphones have become available, in which the user may manually adjust/roll the display screen.
An indicative parameter for mobile devices of different sizes is known as the form factor. In particular, the form factor of a mobile phone is its size, shape, and style, as well as the layout and position of its major components. The form factor of a device may also be generally expressed as a device holding size. Currently available form factors may be categorized into the standard examples of phone, phablet, band, or tablet modes. Accordingly, each form factor corresponds to a different size of mobile device. Manually rollable phones support variable form factors (sizes) due to their flexible display screens.
However, the current rollable phones do not include any intelligence to automatically adjust the display screen. The handling and operation of smartphones is hampered when the rolling function is not smart/intelligent.
Accordingly, there is a need for mobile devices including automated flexible display screens, which may predict and adjust the display screen size in a smart manner and without (or with minimal) manual intervention. Further, there is a need for an integrated device which may provide multiple form factor modes for different purposes.
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the invention nor is it intended for determining the scope of the invention.
According to one embodiment of the present disclosure, a method of providing a form factor for a mobile device associated with a user is disclosed. The method comprises detecting a change in at least one of a current operating state of the user and a current digital state of the mobile device. Further, the method comprises predicting a form factor for the mobile device based on the current operating state of the user and the current digital state of the mobile device. Furthermore, the method comprises determining whether to accept the predicted form factor based on a number of touch interactions required for the current digital state of the mobile device and a number of touch interactions supported by the predicted form factor and the current operating state of the user. Additionally, the method comprises providing the predicted form factor for the mobile device based on a determination to accept the predicted form factor, wherein the providing the predicted form factor corresponds to a recommendation to modify physical size of the mobile device.
According to another embodiment of the present disclosure, a system of providing a form factor for the mobile device associated with a user is disclosed. The system comprises a detection module configured to detect a change in at least one of a current operating state of the user and a current digital state of the mobile device. Further, the system comprises a form factor prediction module in communication with the detection module and configured to predict a form factor for the mobile device based on the current operating state of the user and the current digital state of the mobile device. Furthermore, the form factor prediction module is configured to determine whether to accept the predicted form factor based on a number of touch interactions required for the current digital state of the mobile device and a number of touch interactions supported by the predicted form factor and the current operating state of the user. Additionally, the form factor prediction module is configured to provide the predicted form factor for the mobile device based on a determination to accept the predicted form factor, wherein the predicted form factor corresponds to a recommendation to modify physical size of the mobile device.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates an exemplary use case of modifying a form factor for a mobile device associated with a user, according to an embodiment of the present invention;
Figure 2 illustrates a schematic block diagram of a mobile device for determining form factor based on current operating state of the user and/or digital state of the mobile device, according to an embodiment of the present invention;
Figure 3 illustrates a schematic block diagram of modules of the mobile device for predicting form factor based on current operating state of the user and/or digital state of the mobile device, according to an embodiment of the present invention;
Figures 4a and 4b illustrate schematic block diagrams for predicting form factor without and with validation respectively, according to various embodiments of the present invention;
Figure 5a illustrates a detailed block diagram of form factor prediction model and validation model for predicting form factor, according to an embodiment of the present invention;
Figure 5b illustrates a detailed block diagram of form factor prediction model and the validation model to predict form factor during training phase, according to an embodiment of the present invention;
Figure 6 illustrates an exemplary process flow for determining form factor based on current operating state of the user and/or digital state of the mobile device, according to an embodiment of the present invention.
Figure 7 illustrates an exemplary use case of acceptance of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention;
Figure 8 illustrates an exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention; and
Figure 9 illustrates another exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, with such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein, being contemplated as would normally occur to one skilled in the art to which the invention relates. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting. For example, the term "some" as used herein may be understood as "none" or "one" or "more than one" or "all." Therefore, the terms "none," "one," "more than one," "more than one, but not all" or "all" would fall under the definition of "some." It should be appreciated by a person skilled in the art that the terminology and structure employed herein is for describing, teaching and illuminating some embodiments and their specific features and elements and therefore, should not be construed to limit, restrict or reduce the spirit and scope of the claims or their equivalents in any way.
For example, any terms used herein such as, "includes," "comprises," "has," "consists," and similar grammatical variants do not specify an exact limitation or restriction, and certainly do not exclude the possible addition of one or more features or elements, unless otherwise stated. Further, such terms must not be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated, for example, by using the limiting language including, but not limited to, "must comprise" or "needs to include."
Whether or not a certain feature or element was limited to being used only once, it may still be referred to as "one or more features" or "one or more elements" or "at least one feature" or "at least one element." Furthermore, the use of the terms "one or more" or "at least one" feature or element do not preclude there being none of that feature or element, unless otherwise specified by limiting language including, but not limited to, "there needs to be one or more..." or "one or more element is required."
The term "coupled" or "connected" as used herein refers to any logical, optical, physical or electrical connection, link or the like by which electrical or magnetic signals produced or supplied by one system element are imparted to another coupled or connected element.
Unless otherwise defined, all terms and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by a person ordinarily skilled in the art.
Reference is made herein to some "embodiments." It should be understood that an embodiment is an example of a possible implementation of any features and/or elements presented in the attached claims. Some embodiments have been described for the purpose of explaining one or more of the potential ways in which the specific features and/or elements of the attached claims fulfil the requirements of uniqueness, utility, and non-obviousness.
Use of the phrases and/or terms including, but not limited to, "a first embodiment," "a further embodiment," "an alternate embodiment," "one embodiment," "an embodiment," "multiple embodiments," "some embodiments," "other embodiments," "further embodiment", "furthermore embodiment", "additional embodiment" or other variants thereof do not necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or in the context of more than one embodiment, or in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any features and/or elements described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.
Any particular and all details set forth herein are used in the context of some embodiments and therefore should not necessarily be taken as limiting factors to the attached claims. The attached claims and their legal equivalents can be realized in the context of embodiments other than the ones used as illustrative examples in the description below.
Embodiments of the present invention will be described below in detail with reference to the accompanying drawings.
The present disclosure relates to intelligent systems and methods to predict an ideal form factor for a mobile device based on the current operating and digital states during operation of the mobile device. The form factor is an indicative parameter for different sizes of mobile devices. In particular, the form factor of the mobile phone is its size, shape, and style, as well as the layout and position of its major components. In one embodiment, the form factor may correspond to a holding size of the mobile device. The form factors may be categorized into phone, phablet, band, or tablet modes. Based on the predicted form factors, the operating system of the mobile device may automatically adjust the physical dimension(s) and/or display screen of the mobile device.
Figure 1 illustrates an exemplary use case 100 of modifying a form factor for a mobile device 102 associated with a user 104. The use case 100 depicts a mobile device 102 (represented as 102a, 102b, 102c, and 102d in different form factors) associated with a user 104 (represented as 104a, 104b, 104c, and 104d in different operating conditions). Examples of the mobile device 102 may include a mobile phone, a tablet, a phablet, and a wearable band-sized device. In one embodiment, the mobile device 102 may be an adjustable display device. In particular, the mobile device 102 may be configured to modify its physical display screen size based on modification of at least one physical dimension such as length, width, etc. Further, based on modification in physical dimension(s) of the mobile device 102, the size of the user interface or display interface may also be modified. In various embodiments, the mobile device 102 may be a communication device having two or more modes or form factors, wherein each of the modes corresponds to the mobile device 102 functioning as a tablet, a mobile phone, a phablet, and a wearable band-sized device, respectively. Each of the modes corresponds to a different form factor for the mobile device 102.
In one embodiment, the mobile device 102 may be configured to determine a current operating state of the user 104 based on one or more of a physical state, a holding state, and an environmental state of the user 104 during operation of the mobile device 102. In the current exemplary use case, the mobile device 102a may determine the physical state of the user 104a, i.e., the user 104a is walking. Further, the mobile device 102a may determine the holding state, i.e., the user 104a is holding the mobile device 102a using one hand. Additionally, the mobile device 102a may determine the environmental state, i.e., the user 104a is currently on a street. The operating state of the user is determined cumulatively based on one or more of these states.
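The cumulative operating state described above can be pictured as a simple record of the three sub-states. The following is a hypothetical sketch; the field names and string values are assumptions for illustration only:

```python
# Hypothetical sketch: the current operating state of the user as a record of
# the physical, holding, and environmental sub-states inferred by the device.
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingState:
    physical: str     # e.g., "walking", "standing", "sitting"
    holding: str      # e.g., "one hand", "both hands", "not held"
    environment: str  # e.g., "street", "home", "inside kitchen"

# The state of user 104a in Figure 1:
state_104a = OperatingState("walking", "one hand", "street")
```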
Further, in one embodiment, the mobile device 102 may be configured to determine its current digital state based on a state of at least one software component of the operating system during operation. In the current exemplary use case, the mobile device 102a may determine one or more of, but not limited to, application history such as the top or last five used mobile applications, mobile applications in the foreground, mobile applications in the background, work mode/private mode, battery level, network signal level, and actively paired device(s). In the current operating and digital states, the mobile device 102a may be operating in a band mode, i.e., where the display size and physical dimensions of the mobile device may correspond to a wearable band.
Subsequently, the mobile device 102a may detect a change in at least one of the current operating state of the user 104b and the current digital state of the mobile device 102a. For example, the change in the current operating state of the user 104b may be a change in the environmental state of the user. Specifically, the user 104 may have moved from the street to his home. Based on the detection of a change in at least the current operating state, the mobile device 102a may predict a new form factor. In one embodiment, the predicted form factor may correspond to phone mode, i.e., the mobile device 102a in band mode may be converted to the physical dimensions and display size of the mobile device 102b in mobile phone mode. In one exemplary embodiment, the different form factor values or modes described in the present invention may correspond to the following exemplary values of display sizes or physical dimensions of the mobile device. For phablet mode, the size may correspond to 5.5 to 7 inches. For phone mode, the size may correspond to 4.5 to 5.5 inches. For band mode, the size may correspond to less than 4.5 inches. For tablet mode, the size may correspond to greater than 7 inches. However, as a skilled person would appreciate, these values are for exemplary purposes only and may vary. In fact, the form factors may be categorized into more than the four modes discussed above, and each mode may have a further distinguishing range of display sizes or physical dimensions.
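The exemplary size ranges above can be summarized as a simple classifier from display size to form factor mode. The following is a hypothetical sketch using the boundary values given in this description; whether each boundary falls into the smaller or larger mode is an assumption:

```python
# Hypothetical sketch: classify a display size (in inches) into one of the four
# exemplary form factor modes, using the ranges given in the description.

def mode_for_size(size_inches: float) -> str:
    if size_inches < 4.5:
        return "band"      # less than 4.5 inches
    if size_inches < 5.5:
        return "phone"     # 4.5 to 5.5 inches
    if size_inches <= 7.0:
        return "phablet"   # 5.5 to 7 inches
    return "tablet"        # greater than 7 inches
```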
Subsequently, the predicted form factor may be validated based on a number of touch interactions required for the current digital state, and further based on a number of touch interactions supported by the predicted form factor and the current operating state of the user 104b. Upon validation, the predicted form factor may be used by the mobile device 102a to modify its current mode of operation to mobile device 102b. As depicted in Figure 1, the mobile device 102a in band mode may be automatically modified to mobile phone mode 102b in response to the validation.
Similarly, upon detection of further changes in at least one of current operating state and digital state, i.e., user 104 may now be standing (physical state) instead of walking and may be operating the mobile device 102b using both hands (holding state) instead of one hand, and may further change his location from home to street as depicted in 104c. Based on the change in current operating state to 104c, the mobile device 102 may predict that an ideal/appropriate form factor in the current conditions would be phablet mode. Further, based on the validation, the mobile device 102b may automatically modify its mode from mobile phone 102b to phablet mode 102c by changing one or more physical dimensions and display size.
Further, upon change in physical state of the user from 104c to 104d, i.e., from standing to sitting, and change in environmental state from street to home, the ideal form factor may be predicted as tablet mode. Upon validation of the predicted form factor, the mobile device 102c may automatically modify its physical dimensions and display size to tablet mode 102d.
Figure 2 illustrates a schematic block diagram 200a of a mobile device 200 for determining a form factor based on current operating state of the user and/or digital state of the mobile device 200.
In an embodiment, the mobile device 200 may be configured to determine the form factor based on current operating state of the user and/or digital state of the mobile device 200. The mobile device 200 may include at least one processor 202, Input/Output (I/O) interface 204, a display motor 206, location module 208, sensors 210, a transceiver 212, and a memory 214. The memory 214 may further include modules 216, database 218, an operating system 220, and a display manager 222.
In one embodiment, the mobile device 200 may be in communication with a connected device 224 to determine the current operating state of the user and/or the digital state of the mobile device 200. Examples of connected device 224 may include, but not limited to, an Internet of Things (IoT) device, a docking station, smart glasses, and a wearable device such as a smart watch. The connected device 224 may include at least a transceiver 226, sensors 228, processor 230, and a memory 232. For the sake of brevity, only one connected device has been shown in Figure 2, however, there may be more than one connected device 224 present in communication with the mobile device 200. In an alternative embodiment, the mobile device 200 may be configured to determine the current operating state of the user and the digital state of the mobile device 200 independently and without any connected device.
In an embodiment, the mobile device 200 may be an adjustable display device. In particular, the mobile device 200 may be configured to automatically modify its physical display screen size by modifying at least one physical dimension such as length, width, etc. Further, based on the modification in the physical dimension(s) of the mobile device 200, the size of the user interface or display interface may also be modified, or vice versa. In various embodiments, the mobile device 200 may be a communication device having two or more modes, wherein each of the modes corresponds to the mobile device 200 functioning as a tablet, a mobile phone, a phablet, and a wearable band-sized device, respectively. Each of the modes corresponds to a different form factor for the mobile device 200.
The processors 202 and 230 may each include at least one data processor for executing processes in a Virtual Storage Area Network. The processors 202 and 230 may each include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
The processors 202 and 230 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 204 and the transceiver 226, respectively. The I/O interface 204 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), radio frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
Using the I/O interface 204, the mobile device 200 may communicate with one or more I/O devices. For example, the input device may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
The processor 202 may be disposed in communication with a communication network via a network interface. The network interface may be the I/O interface 204. The network interface may connect to a communication network. The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface and the communication network, the mobile device 200 may communicate with the connected device 224. Similarly, the processor 230 of the connected device 224 may communicate with the mobile device 200 using an I/O interface (not shown) and may transmit/receive data using the transceiver 226.
In some embodiments, the memory 214 may be communicatively coupled to the at least one processor 202. Similarly, the memory 232 of the connected device 224 may be communicatively coupled to the at least one processor 230. Each of the memories 214 and 232 stores data and instructions executable by the processors 202 and 230, respectively. The memory 214 may include one or more modules 216 and a database 218 to store data. The one or more modules 216 may be configured to perform the steps of the present disclosure using the data stored in the database 218, to determine an ideal form factor for the mobile device and adjust the physical dimensions of the mobile device 200 based on the determined form factor. In an embodiment, each of the one or more modules 216 may be a hardware unit which may be outside the memory 214.
In one embodiment, the display motor 206 may be configured to automatically modify one or more physical dimensions of the mobile device 200 in response to determination of an ideal form factor for the mobile device 200 based on the current operating and digital states. The display motor 206 may be configured to produce a torque to fold, unfold, roll, or unroll the flexible display of the mobile device from one size to another size. In some embodiments, the display motor 206 may also be coupled with several other components, such as driver and gears to modify the size of the flexible display and physical dimensions of the mobile device. As the display motors and mechanism to adjust the flexible display are well-known in the art, these are not described here in detail.
In one embodiment, the location module 208 may include a global positioning system (GPS). The location module 208 may be configured to provide current location of the user or mobile device 200 to the modules 216. The current location may form a part of environmental state related parameters, while determining the current operating state of the user. In another embodiment, the location module 208 may include an online software application for determining the current location of the user or mobile device 200.
In one embodiment, the sensors 210 may include one or more sensors for determining current operating state of the user, while operating the mobile device 200. Various examples of the sensors 210 may include inertial, touch, pressure, and illumination sensors, for example, but not limited to, an accelerometer, gyroscope, barometer, and touch sensors. The sensors 210 may be configured to provide inputs to the modules 216 to determine the current operating state of the user, which includes one or more of a physical state, a holding state, and an environmental state of the user. Further, the sensors 228 of the connected device 224 may also supplement the inputs required for determining the current operating state of the user. In one embodiment, the sensors 228 may include, but not limited to, an accelerometer, gyroscope, heart rate, presence, visual, illumination, and docking sensors.
In one embodiment, the display manager 222 may be configured to modify the display size of the user interface. The display manager 222 may have the instantaneous value of the display size available for content display at every step of the transition so that the user interface can adjust. For example, when the form factor has to be modified from mobile phone mode to tablet mode, resulting in enlargement of the display of the mobile device 200, the display manager 222 is configured for re-segmentation of the current display. The re-segmentation is performed in a manner such that the change in the currently presented display interface on the mobile device 200 is seamless.
In an embodiment where the mobile device is a rollable device, the physical size of rolled value of display at any moment may be provided by the display motor 206 to the hardware interfacing layer, e.g., device hardware drivers. If the value of the physical size of rolled display is known at any moment, then the operating system may render/readjust the user interface (UI) components as per the available display size at disposal. Either the UI components may be resized to fit in the reduced/enlarged size or they may be moved in/out of the UI interface depending on the S/W implementation. As the re-segmentation or re-arrangement of the flexible display devices is well-known in the art, these are not described here in more detail.
Figure 3 illustrates a schematic block diagram of modules 216 of the mobile device 200 for determining form factor based on current operating state of the user and/or digital state of the mobile device 200, according to an embodiment of the present disclosure. The modules 216 may include an input aggregator module 302, a detection module 304, a form factor prediction module 306, a validation module 308, a rate module 310, an output module 312, and an artificial intelligence (AI) module 318.
In an embodiment, the input aggregator module 302 may be configured to receive inputs from one or more sensors (e.g., sensors 210) and operating system of the mobile device. Further, in one embodiment, the sensors (e.g., sensors 228) of the connected device(s) may also supplement the inputs required for determining the current operating state of the user. In one embodiment, the sensors of the connected device(s) may include, but not limited to, an accelerometer, gyroscope, heart rate, and illumination sensors. Based on the inputs, the input aggregator module 302 may determine the current operating state of the user and current digital state of the mobile device during operation of the mobile device by the user. The current operating state may include one or more of a physical state, a holding state, and an environmental state of the user. The current digital state of the mobile device may be determined based on a current state of at least one software component of an operating system of the mobile device. The operating system may provide inputs related to application usage, clock, calendars/schedule, network, and battery. Accordingly, the current operating state measures the ease or difficulty of operating the mobile device, while the current digital state measures the interaction requirements of the mobile device. The current operating state and the current digital state may be represented in the form of state vectors. In one embodiment, the overall state may be presented as a single state vector computed based on the current operating state and the current digital state.
In one embodiment, the current operating state may be utilized for determining a number of touch interactions provided/supported by user to the mobile device at any specific moment. Further, in an embodiment, the current digital state may be used for determining a number of touch interactions needed/demanded by the mobile device at a specific moment.
In an exemplary embodiment, the physical state of the user may be predicted based on inputs from one or more sensors of the mobile device and/or connected device(s), e.g., accelerometer and/or gyroscope. The physical state may indicate a current physically active state or state of motion or posture of the user, such as, walking, sitting, running, standing, climbing stairs, etc. In other words, the physical state represents conditions which define the current manner of movement of the user. Further, the sensors of the connected device (e.g., sensors 228) may also supplement the inputs required for determining the physical state of the user. In one embodiment, the sensors of the connected device may include, but not limited to, an accelerometer, gyroscope, and heart rate sensor. In an embodiment, the physical state may be predicted using machine learning or deep learning models associated with human activity recognition principles well known in the art.
In one exemplary embodiment, the output of the input aggregator module 302 for the physical state may be an array of dimensions equal to the total supported physical states. For example, if the total supported physical states are nine, including walking, resting, climbing up, climbing down, standing, running, sitting, bike riding, and dancing, then the output corresponding to the physical state may be a 1x9 dimension vector. Each value in the 1x9 vector, e.g., 1 for true and 0 for false, may indicate the current physical state.
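The one-hot encoding described above may be sketched as follows. The state names are taken from the example and are illustrative only; the function name is hypothetical, and 1 is assumed to mark the current (true) state.

```python
# Exemplary supported physical states (names illustrative).
PHYSICAL_STATES = ["walking", "resting", "climbing up", "climbing down",
                   "standing", "running", "sitting", "bike riding", "dancing"]

def encode_physical_state(state: str) -> list:
    """Return the 1x9 one-hot vector for the current physical state
    (1 for true, 0 for false)."""
    if state not in PHYSICAL_STATES:
        raise ValueError(f"unsupported physical state: {state}")
    return [1 if s == state else 0 for s in PHYSICAL_STATES]
```

The same pattern applies to the 1x10 holding state vector discussed below, with the list of supported holding states substituted.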
Further, the holding state of the user may be predicted using inputs from accelerometer, touch sensors, and/or gyroscope sensors of the mobile device. Furthermore, for prediction of the holding state, the sensors of the connected device (e.g., sensors 228) may also supplement the inputs required for determining the current operating state of the user. In one embodiment, the sensors of the connected device (e.g., a smartwatch) may include, but not limited to, an accelerometer, gyroscope, and heart rate sensor. In another embodiment, the sensors of the connected docking station device may include a docking station sensor. The holding state of the user may indicate a manner of holding the mobile device by the user during the operation of the mobile device. For example, the holding state of the user may indicate whether the user is holding the mobile phone using his left hand, right hand, both hands with portrait orientation, or both hands with landscape orientation. Further, the holding state may indicate whether the user has currently docked the mobile device at a docking station, or whether the mobile device is in a driving mode. In an embodiment, the holding state may be predicted using machine learning or deep learning models associated with operating hand recognition principles well known in the art.
In one exemplary embodiment, the output of the input aggregator module 302 for the holding state may be an array of dimensions equal to the total supported holding states. For example, if the total supported holding states are ten, including, but not limited to, RHRO (right hand right operate), LHLO (left hand left operate), RHLO (right hand left operate), LHRO (left hand right operate), BHBO (both hand both operate), stylus, mouse, car dock, and bike dock, then the output corresponding to the holding state may be a 1x10 dimension vector. Each value in the 1x10 vector, e.g., 1 for true and 0 for false, may indicate the current holding state.
Further, in one embodiment, the environmental state of the user may be determined using inputs from the location module 208. The environmental state of the user may indicate at least a location of the user and/or indicate the environment in vicinity of the user, while operating the mobile device 200. For example, the environmental state may indicate whether the user is within his/her home or office, or whether the user is on street or in a park, etc. Additionally, the environmental state of the user may indicate current temperature, weather, and a time of the day. Such environmental state data may be derived using a software application (not shown) residing inside the mobile device, or using other known mechanisms. In another embodiment, the environmental state of the user may be determined using inputs from presence, visual, and/or illumination sensors of one or more connected IoT devices (e.g., device 224) or sensors of the mobile device.
In one exemplary embodiment, the output of the input aggregator module 302 for the environmental state may be an array of dimensions equal to the total supported environmental states. For example, there may be a total of six supported environmental states, including location, weather, outlook, hour, indoor location, and illumination; and for each of these environmental states, there may be a different dimensional vector. For location, an 11-dimensional vector may be used; for weather, a 4-dimensional vector may be used; for outlook, a 4-dimensional vector may be used; for indoor location, a 7-dimensional vector may be used; for hour, a 6-dimensional vector may be used; and for illumination, a 5-dimensional vector may be used. Accordingly, the output corresponding to the environmental state may be a 6x11 dimensional vector. Each value in the 6x11 vector, e.g., 1 for true and 0 for false, may indicate the current environmental state. It may be apparent to a person skilled in the art that zero padding will be done for those sub-states of environmental states (e.g., weather, outlook, indoor location, hour, and illumination) which have dimensions less than the highest dimension, i.e., 11.
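The zero-padded 6x11 matrix described above may be sketched as follows. The sub-state names and dimensions follow the example; the index-based interface and the convention that 1 marks the active value are assumptions for illustration.

```python
# Per-sub-state dimensions from the example above (names are illustrative).
ENV_DIMS = {"location": 11, "weather": 4, "outlook": 4,
            "indoor_location": 7, "hour": 6, "illumination": 5}
MAX_DIM = max(ENV_DIMS.values())  # widest sub-state (location, 11)

def env_state_matrix(active: dict) -> list:
    """Build the 6x11 environmental state matrix with zero padding.

    `active` maps each sub-state name to the index of its current value.
    Rows shorter than the widest sub-state are zero-padded up to 11
    columns, as described above.
    """
    matrix = []
    for name, dim in ENV_DIMS.items():
        index = active[name]
        if not 0 <= index < dim:
            raise ValueError(f"invalid index {index} for sub-state {name}")
        row = [0] * MAX_DIM   # zero padding up to 11 columns
        row[index] = 1        # mark the active value (1 for true)
        matrix.append(row)
    return matrix
```

The 9x8 digital state matrix discussed later can be built the same way, with per-parameter dimensions padded up to 8.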
Further, in one embodiment, the current digital state of the mobile device may be determined based on a current state of at least one software component of an operating system of the mobile device. For determining the current digital state, the input aggregator module 302 may receive one or more inputs from the operating system of the mobile device and determine one or more of, but not limited to, application history such as the top or last 5 used mobile applications, mobile applications in the foreground, mobile applications in the background, work mode/private mode, battery level, network signal level, and actively paired device(s). In one embodiment, the current digital state of the mobile device determined by the input aggregator module 302 may include a current digital operation executed on the mobile device. For example, the current digital state may indicate whether the user is messaging, checking notifications, watching a video, browsing, or listening to songs on a software application. In another embodiment, the current digital state of the mobile device may include a current mode of operation of the mobile device, for example, but not limited to, personal mode, work mode, etc. Thus, the current digital state of the mobile device represents an overall software state of the mobile device including sub-states of one or more software components, such as applications, network, battery level, and/or connected devices. In an embodiment, the input aggregator module 302 may be configured to determine a current vectorized digital state of the mobile device based on the current digital state, which represents the current user activities, such as multi-tasking, video calling, browsing, messaging, etc.
In one exemplary embodiment, the output of the input aggregator module 302 for the current digital state may be an array of dimensions 9x8. The parameters for determining current digital state along with the possible values and/or dimensions are depicted below in Table 1.
Table 1: Exemplary digital state parameters

Digital State Parameter | Value | Dimensions
Foreground | [Application Category 1, Application Category 2] | 2-dimensional vector
Background | [Application Category 1, Application Category 2] | 2-dimensional vector
History | [Application Category 1, Application Category 2] | 2-dimensional vector
Security | Personal Space/Workspace/Secure Space | 3-dimensional vector
Network | Network State | 4-dimensional vector
Battery | Battery State | 4-dimensional vector
Data | Data State | 5-dimensional vector
Companions | [Array of Connected Devices] | 8-dimensional vector
Resources | [Array of Resources in Use] | 8-dimensional vector
Accordingly, the output corresponding to the digital state may be a 9x8 dimensional vector. Each value in the 9x8 vector, e.g., 1 for true and 0 for false, may indicate the current digital state. It may be apparent to a person skilled in the art that zero padding will be done for those contexts of digital states which have dimensions less than the highest dimension, i.e., 8.
In one embodiment, the detection module 304 may monitor and detect a change in the current operating state of the user and/or the current digital state of the mobile device. In an exemplary embodiment, the detection module 304 may detect a change in the current physical state of the user from running to sitting. In another exemplary embodiment, the detection module 304 may detect a change in the holding state of the user from one hand to both hands. In yet another exemplary embodiment, the detection module 304 may detect a change in the environmental state of the user from street to home. In yet another exemplary embodiment, the detection module may detect a change in the digital state of the mobile device, e.g., an initiation of a video call detected through an application running in the foreground.
In one embodiment, the form factor prediction module 306 may predict an ideal form factor for the mobile device, based on the change detected in the current operating state of the user and/or the current digital state of the mobile device. Referring to the exemplary use case of Figure 1, upon detection of a change in the current operating state of the user from walking on the street to walking inside the home, the form factor prediction module 306 may predict an ideal form factor as the phone mode from the currently existing band mode. The predicted form factor is based on the detected change in the current operating state, since the detected change would indicate that while walking inside the home, the user may be able to operate the phone in the default phone mode (instead of the band mode used while walking on the street), as a firm grip may not be required inside the home as compared to the street. In one embodiment, the form factor prediction module 306 may determine a roll value or fold value indicating the percentage change in form factor (i.e., size or shape) required from the current form factor in the current operating and digital states. In one exemplary embodiment, the roll/fold value may correspond to a 125% form factor, i.e., the form factor of the mobile device needs to be modified to 125% of the current form factor, which may correspond to a change from phone to tablet mode. In another embodiment, the roll/fold value may be specified with respect to a minimum or maximum size of the flexible display of the mobile device.
In one exemplary embodiment, the form factor prediction module 306 may receive inputs from the AI/training module 318, including a transition state based on historical data corresponding to manual transitions of form factor performed by users in the past under various operating and digital conditions. In another embodiment, the form factor prediction module 306 may retrieve the instantaneous display size value from the display manager 222 and derive the value of the transition in relation to the display size (i.e., the transition required from the instantaneous display size to the predicted form factor). The ratio of the current display size to the maximum display size provides the percentage transition value at any moment.
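The display-size-ratio computation described above may be sketched as follows. The function and parameter names are hypothetical; the arithmetic simply expresses each size as a percentage of the maximum display size and takes the difference.

```python
def transition_percentage(current_size: float, target_size: float,
                          max_size: float) -> float:
    """Percentage of roll/unroll transition required, expressed relative
    to the maximum display size of the flexible display.

    For example, moving from a 4.5-inch display to a 7.5-inch display on
    a device whose maximum display is 7.5 inches is a transition of 40
    percentage points (60% rolled out to 100% rolled out).
    """
    current_pct = 100.0 * current_size / max_size
    target_pct = 100.0 * target_size / max_size
    return target_pct - current_pct
```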
In one exemplary embodiment, the transition state input may be an array of transition values and rates associated with various operating and digital states. For example, the different transition states may include band, phone, phablet, and tablet, while the rates of transition may include three rates: slow, normal, and fast. Accordingly, the transition state input may be a 2x4 dimensional vector. Each value in the 2x4 vector, e.g., 1 for true and 0 for false, may indicate the current transition state.
In one embodiment, the validation module 308 may be configured to validate the predicted form factor by determining whether to accept the predicted form factor based on touch interactions. The touch interactions may include any touch-based interactions of the user with the mobile device such as, but not limited to, taps, swipes, long press, drags, pinch, etc. Additionally, the touch interactions may include the frequency and occurrence rate of such gestures over a period of time. In particular, the determination of whether to accept the predicted form factor is based on a number of touch interactions required for the current digital state of the mobile device, and further based on a number of touch interactions supported by the predicted form factor and the current operating state of the user. The validation module 308 is configured to filter low confidence output of the form factor prediction module 306 due to incorrect data input through the input aggregator module 302. In other words, the validation module 308 is configured to validate the form factor because of conflicting input states due to wrong predictions of predictive states (e.g., physical and/or holding state) or by genuine user mistakes.
More specifically, the validation module 308 may be configured to determine a touch interaction support quotient (TISQ) based on the number of touch interactions supported by the predicted form factor and the current operating state of the user. The TISQ corresponds to a level of touch interactivity which certain operating conditions may support at a specific moment. For example, when a user is running and operating a phone with only secondary hand, he/she may not be able to perform pinch gestures, or he/she may not comfortably use the keyboard in this state. So, this particular condition may be presumed to have very low TISQ.
Further, the validation module 308 may be configured to determine a touch interaction requirement quotient (TIRQ) based on the number of touch interactions required for the current digital state of the mobile device. The TIRQ corresponds to the level of touch interactivity which a particular digital state requires at a specific moment. For example, when a user is performing a video call or playing a video, the amount of touch interaction needed is very low, so the TIRQ will be presumed to be low in this case. However, during instant messaging or mailing, the level of touch interaction required is very high, so the TIRQ will be presumed to be high in this case.
Subsequently, the validation module 308 may compare the TISQ and the TIRQ to accept or reject the predicted form factor. In one embodiment, if the TISQ is greater than the TIRQ, then the output corresponds to accepting the predicted form factor. If the TISQ is less than the TIRQ and the difference is small, then the output still corresponds to accepting the predicted form factor. If the TISQ is less than the TIRQ and the difference is significantly large, then the output corresponds to rejecting the predicted form factor.
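The acceptance logic above may be sketched as follows. The tolerance margin separating a "small" from a "significantly large" shortfall is a hypothetical parameter, as the description does not specify a numeric threshold.

```python
def validate_form_factor(tisq: float, tirq: float,
                         tolerance: float = 0.1) -> bool:
    """Accept or reject a predicted form factor by comparing the touch
    interaction support quotient (TISQ) against the touch interaction
    requirement quotient (TIRQ).

    The `tolerance` value is an illustrative assumption, not a value
    from the description.
    """
    if tisq >= tirq:
        return True   # support meets or exceeds demand: accept
    if tirq - tisq <= tolerance:
        return True   # small shortfall: still accept
    return False      # large shortfall: reject the prediction
```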
In one embodiment, the rate module 310 may be configured to determine a rate or speed at which to implement the predicted form factor for the mobile device upon validation or determination to accept the predicted form factor. To implement the predicted form factor, the transition from the existing form factor to the predicted form factor may be completed in a predefined transition duration. The rate or speed of transition is determined corresponding to the predefined transition duration. In an exemplary embodiment where the mobile device may provide rolling mechanisms to modify the form factor or physical dimensions, the rate module 310 may determine a roll rate to implement the predicted form factor. The roll rate may indicate a rate or speed at which the mobile device is rolled from the existing physical dimensions and display size to the physical size corresponding to the predicted form factor. In an alternative embodiment, the rate may correspond to a rate of folding the mobile device to implement the predicted form factor. In one exemplary embodiment, the rate module 310 may determine the rate of transition from a current form factor to the predicted form factor as one of fast, normal, or slow. Each of these three exemplary rates may correspond to a different speed of performing the transition. For example, for a transition from a phone mode of display size 4.5 inches to a tablet mode of display size 7.5 inches, the transition duration may be fixed, and a rate of transition may be determined accordingly. A slow rate of transition may correspond to greater than 2 seconds, a normal rate of transition may correspond to 1-2 seconds, and a fast rate of transition may correspond to less than 1 second.
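The exemplary duration-to-rate classification may be sketched as a small helper (the function name is hypothetical; the boundaries follow the exemplary durations above):

```python
def transition_rate(duration_seconds: float) -> str:
    """Classify a transition duration into the three exemplary rates:
    fast (< 1 s), normal (1-2 s), and slow (> 2 s)."""
    if duration_seconds < 1.0:
        return "fast"
    elif duration_seconds <= 2.0:
        return "normal"
    return "slow"
```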
In one embodiment, the rate module 310 may be configured to determine the rate based on the current operating state of the user and the current digital state of the mobile device using a machine learning (ML) or neural network-based learning model stored at AI/training module 318. Specifically, the automatic determination of the rate is based on historical data related to a recorded speed at which the user previously performed the manual change of the form factor on the mobile device in similar operating and digital conditions.
In one embodiment, the value of the rate may be specific to each user, since the rate is determined based on on-device historic learning about the patterns of manual rolling/unrolling/folding/unfolding of the mobile device by the user. For example, when a user on the street opens a messaging application to send a message, he usually rolls/unrolls/folds/unfolds the device very quickly. However, when the same user is at home and opens a video application, he usually rolls/unrolls the device casually and slowly. Accordingly, this behavior of the user regarding the urgency of rolling/unrolling/folding/unfolding is also recorded, and automated suggestions for the rate of transition are based on such behavioral data.
Further, to capture the transition duration during the training phase, the operating system APIs may be the source of input data. The rate at which the size of display interface changes for content representation may be extracted using the operating system of the mobile device. The rate module 310 may use the above recorded rates for training and rate prediction in future.
In one embodiment, the output module 312 may be configured to provide the predicted form factor for the mobile device based on the validation to accept the predicted form factor. The output module 312 may provide the predicted form factor to the operating system of the mobile device. The outputting of the predicted form factor corresponds to a recommendation to modify the physical size/dimensions of the mobile device. In response to receiving the predicted form factor, the operating system may implement the predicted form factor by modifying the physical size/dimensions and display interface size of the mobile device.
In an embodiment, the AI/training module 318 may include a plurality of neural network layers. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), and Restricted Boltzmann Machine (RBM). The learning technique is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. At least one of a plurality of CNN, DNN, RNN, RBM models and the like may be implemented to thereby achieve execution of the present subject matter's mechanism through an AI model. A function associated with AI may be performed through the non-volatile memory, the volatile memory, and the processor. The processor may include one or a plurality of processors. At this time, one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU). The one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
In one embodiment, the AI/training module 318 may be configured to train the form factor prediction module. The training may include detecting a change in at least one of the current operating state of the user and the current digital state of the mobile device. Further, the current operating state of the user and the current digital state of the mobile device may be recorded. The current operating state of the user comprises a form factor state of the mobile device, along with a physical state, a holding state, and an environmental state of the user during operation of the mobile device. The training may further include training a first ML or neural network-based learning model using the physical state, the environmental state, the holding state, and the current digital state as input data and the rolling state as a first training label. Further, the training may include determining an interaction quotient based on a number of touch interactions made by the user in the current digital state. Additionally, the training may include training a second ML or neural network-based learning model using the physical state, the environmental state, the holding state, and the form factor state as input data and the interaction quotient as a second training label. Moreover, the training may include training a third ML or neural network-based learning model based on the current digital state as input data and the interaction quotient as a third training label. Finally, the training may include clustering the interaction quotient into a plurality of classes.
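The assembly of the three training datasets described above can be sketched as follows. All field names and the snapshot structure are hypothetical, and the actual model architectures are left unspecified by the disclosure; this only illustrates how one recorded state snapshot yields one (input, label) row for each of the three learning models:

```python
def build_training_rows(snapshot):
    """Split one recorded state snapshot into the three (input, label) rows
    used to train the three learning models described in the text.
    Field names are illustrative only."""
    ps = snapshot["physical_state"]
    es = snapshot["environmental_state"]
    hs = snapshot["holding_state"]
    ds = snapshot["digital_state"]
    ff = snapshot["form_factor_state"]
    iq = snapshot["interaction_quotient"]
    row1 = ((ps, es, hs, ds), ff)  # first model: form factor (rolling state) label
    row2 = ((ps, es, hs, ff), iq)  # second model: interaction quotient label
    row3 = ((ds,), iq)             # third model: interaction quotient label
    return row1, row2, row3
```

In use, each time a change in the operating or digital state is detected, one snapshot would be recorded and split into these three rows before being appended to the respective training sets.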
Figures 4a and 4b illustrate schematic block diagrams 400a and 400b for predicting the form factor without and with validation, respectively. In particular, the figures illustrate the technical advantages of utilizing the validation module before providing the final recommendation of the predicted form factor by the form factor prediction module. As previously discussed, the form factor prediction module 306 may predict an ideal form factor for the mobile device, based on the changes detected in the current operating state of the user and/or the current digital state of the mobile device. In one embodiment, the form factor prediction module 306 may be depicted as a form factor prediction model 402 to predict the ideal form factor for the mobile device. The form factor prediction model 402 may be a machine learning or neural network-based model to predict the ideal form factor.
In one embodiment, the form factor prediction model 402 may receive inputs related to the current operating state of the user and the digital state (DS) of the mobile device from one or more state provider models (not shown) included within the input aggregator module 302. The current operating state may include a physical state (PS), a holding state (HS), and an environmental state (ES). While the state provider models used for inputs related to the DS and ES states are logical, the state provider models for the PS and HS may be predictive in nature. As such, the inputs related to the PS and HS may include some errors. In one embodiment, the PS and HS states are predicted by AI-based state provider models, which will generally have less than 100% accuracy, thereby introducing errors in the prediction of the ideal form factor. Further, in another embodiment, even the DS and ES may include some errors. For example, the user of the mobile device may have opened an application inadvertently, which may introduce errors and reduce the accuracy of the form factor prediction model 402. According to yet another embodiment, the states predicted by the state provider models may be conflicting in nature.
Table 2 below illustrates conflicting states predicted by the state provider models. It may be assumed that state S1 is a current operating state, while S2 is the DS.
| State S1 | State S2 | Conflict in States S1 and S2 |
| --- | --- | --- |
| Running | Mailing (Computed) | The user will never write mails while running; in this case, "running" might have been falsely predicted |
| Driving | Both Hands Operate Mode | The user will generally not operate the phone with both hands while driving |
| Resting | Route Navigation (Computed) | The user will not navigate a route while resting or sleeping |
| Running | File Management (Computed) | The user will not do file management on a smartphone while running |
Thus, the current operating state and the digital state may at times conflict with each other due to errors introduced while predicting these states, which may lead to an incorrect prediction of the ideal form factor for the mobile device, as depicted in Figure 4a.
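A naive rule-based filter over the state pairs in Table 2 could be sketched as below. In the disclosed design this filtering is instead performed statistically by the validation model using touch interactivity, so the hard-coded lookup table here (with illustrative state labels) is purely a conceptual illustration of what "conflicting states" means:

```python
# Hypothetical conflicting (operating state, digital state) pairs from Table 2;
# the string labels are illustrative, not values used by the actual system.
CONFLICTING_PAIRS = {
    ("running", "mailing"),
    ("driving", "both_hands_operate"),
    ("resting", "route_navigation"),
    ("running", "file_management"),
}

def states_conflict(operating_state, digital_state):
    """Return True when the predicted state pair is implausible."""
    return (operating_state, digital_state) in CONFLICTING_PAIRS
```

Such a static table cannot generalize to unseen state combinations, which is why the validation model's learned interaction quotients are used instead.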
Figure 4b illustrates a validation model 404 corresponding to the validation module 308 which may be configured to validate the predicted form factor by determining whether to accept the predicted form factor based on touch interactions. The purpose of the validation model 404 is to improve the overall accuracy of the form factor prediction model 402 by filtering out the inaccurate predictions in the form factor value outputted by the form factor prediction model 402 (which may occur due to the incorrect/noisy input data of one or more state providers). In one embodiment, the validation model 404 outputs True or False based on the state provider inputs and the predicted form factor value. The validation model 404 uses the concept of touch interactivity, which is explained in conjunction with Figures 5 and 6 to filter the conflicting states.
Figure 5a illustrates a detailed block diagram 500a of the form factor prediction model and the validation model for predicting the form factor, according to an embodiment of the present invention. In one embodiment, the input aggregation model 502 and the state provider models 504 may be included as a part of the input aggregation module 302, as previously discussed. In another embodiment, the input aggregation model 502 may be included as a part of the input aggregation module 302, while the state provider models 504 may be provided as a separate module.
As depicted, the input aggregation model 502 may be configured to receive inputs from one or more sensors (e.g., sensors 210 and/or sensors 228) and an operating system of the mobile device. The input aggregation model may be configured to receive inputs from sensors such as, but not limited to, an accelerometer, a gyroscope, a heart rate sensor, and an illumination sensor. In one embodiment, as depicted in Figure 5a, the input received from the sensors and the operating system may include inertial and interaction inputs from the mobile device, sensory inputs from one or more connected devices (e.g., 224), usage contextual inputs from the operating system of the mobile device, and environmental inputs. Additionally, the input aggregation model may include form factor inputs, i.e., the current form factor value for the mobile device. In other words, the form factor inputs may be related to whether the mobile device is currently in one of a mobile mode, a phablet mode, or a tablet mode. In one embodiment, the form factor inputs may also include historical form factor inputs, which may be associated with historical data points related to appropriate form factor values for different user operating states and digital states. More specifically, the historical form factor inputs may include data related to recorded values of user-preferred form factors of the mobile device in various situations.
In one embodiment, the state provider models 504 may include models for providing the physical state, holding state, environmental state, digital state, and form factor. These state provider models 504 may be configured to determine the physical state, holding state, environmental state, digital state, and a current value of the form factor, in accordance with various embodiments of the present invention. Additionally, the state provider models may include a state vocabulary model storing definitions associated with operating states, digital states, and their associated inputs and outputs, as discussed throughout this disclosure. More specifically, the state vocabulary model is a configuration file associated with metadata related to the inputs/outputs of all the various modules, as discussed throughout the disclosure.
In one embodiment, the state provider models 504 may provide physical state, holding state, environmental state, and the digital state as four different inputs to the form factor prediction model 506 for prediction of an ideal form factor value for the mobile device. The predicted ideal form factor value may then be validated using the trained validation module 514.
For the purpose of validation of the predicted form factor value, the physical state, holding state, and environmental state (collectively referred to herein as "the current operating state"), along with the predicted form factor value, may be provided as an input to the interaction support model (ISM) 508. The ISM 508 may be configured to determine/predict a touch interaction support quotient (TISQ) based on the number of touch interactions supported by the predicted form factor and the current operating state of the user. The TISQ corresponds to a level of touch interactivity which certain operating conditions may support at a specific moment. For example, when a user is running and operating a phone with only a secondary hand, he/she may not be able to perform pinch gestures, or he/she may not comfortably use the keyboard in this state. So, this particular condition may be presumed to have a very low TISQ.
Further, for validation, the digital state may be provided as an input to an interaction requirement model (IRM) 510. The IRM 510 may be configured to determine a touch interaction requirement quotient (TIRQ) based on the number of touch interactions required for the current digital state of the mobile device. The TIRQ corresponds to the level of touch interactivity which a particular digital state requires at a specific moment. For example, when a user is performing a video call or playing a video, the amount of touch interaction needed is very low, so the TIRQ will be presumed to be low in this case. However, during instant messaging or mailing, the level of touch interaction required is very high, so the TIRQ will be presumed to be high in this case.
Subsequently, for validation, an interaction cluster manager (ICM) 512 may receive the TISQ and the TIRQ to accept or reject the predicted form factor. Firstly, during the training phase, the ICM 512 is used for clustering of the real interaction quotient values outputted by an interaction quotient computer (IQC) 516, as depicted in Figure 5b. The ICM 512 may be trained using an input value "I" (or Interaction Quotient "IQ") from the IQC 516, wherein the input value I or IQ is based on two different sub-input values I1 and I2. In one embodiment, while the digital state may include 8 different contexts, i.e., background, history, network, security, connected devices, settings, resources, and foreground applications; I1 may be a subset of these 8 digital states and may only include the foreground and resources contexts along with the timestamp of that digital state. In one embodiment, the input I2 may be touch gesture data of user interaction in a particular digital state and may include the timestamp of the digital state. For example, the touch gesture data may relate to Tap/Swipe/Long Press/Drag/Double Tap/Multi Touch. In one embodiment, the input value I or IQ may be triggered whenever the input I1 changes, i.e., it may be calculated for one particular digital state session. When the input I1 changes, the IQC 516 may transform the values of inputs I1 and I2 and output the average touch-down time per second of that digital state session. For example, if a chat messenger session of the user lasted for 40000 ms and the user touched down on the screen for a total time of 4000 ms, the interaction quotient (IQ) = 4000/40000, i.e., 0.1. The IQ value corresponds to the input I provided to the ICM 512 during the training phase.
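A minimal sketch of the interaction-quotient computation performed by the IQC, assuming touch events are recorded as (down, up) millisecond timestamp pairs within one digital-state session:

```python
def interaction_quotient(session_duration_ms, touch_events):
    """Average touch-down time per unit time for one digital-state session.
    touch_events: list of (down_ms, up_ms) pairs recorded during the session."""
    if session_duration_ms <= 0:
        raise ValueError("session duration must be positive")
    touch_down_ms = sum(up - down for down, up in touch_events)
    return touch_down_ms / session_duration_ms

# Example from the text: a 40000 ms chat session with 4000 ms of touch-down time
iq = interaction_quotient(40000, [(0, 1500), (5000, 7500)])  # -> 0.1
```

This reproduces the IQ = 4000/40000 = 0.1 example above; the (down, up) event representation is an assumption, since the disclosure only specifies the aggregate touch-down time.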
In one embodiment, the ICM 512 may be configured to cluster the IQ data. The purpose of creating a cluster is to categorize touch interactions into sub-categories. The ICM 512 may create and maintain a 5-class cluster, i.e., very low, low, normal, high, and very high, based on the IQ values. These classes may represent the level of touch interactivity in any particular digital state session (based on the user's actual touch behavior during that session).
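The 5-class clustering maintained by the ICM might be sketched with simple quantile binning over recorded IQ values. The disclosure does not name a specific clustering algorithm, so this is only one plausible realization:

```python
IQ_CLASSES = ["very low", "low", "normal", "high", "very high"]

def fit_class_bounds(iq_samples):
    """Derive four upper boundaries splitting recorded IQ values into 5 classes
    of roughly equal size (quantile binning stands in for clustering here)."""
    s = sorted(iq_samples)
    n = len(s)
    return [s[(i * n) // 5] for i in range(1, 5)]

def classify_iq(iq, bounds):
    """Map an IQ value to one of the five touch-interactivity classes."""
    for label, upper in zip(IQ_CLASSES, bounds):
        if iq < upper:
            return label
    return IQ_CLASSES[-1]
```

A proper 1-D clustering (e.g., k-means with k = 5) would serve equally well; the essential property is only that nearby IQ values land in the same class.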
During the validation phase, the ICM 512 may receive the TISQ and the TIRQ to accept or reject the predicted form factor. In particular, the ICM 512 may determine whether the two IQs predicted by the ISM 508 and the IRM 510, i.e., the TIRQ and the TISQ, belong to the same cluster or not. More particularly, if the TISQ is greater than the TIRQ, then the output corresponds to accepting the predicted form factor. If the TISQ is less than the TIRQ, then the clusters/classes of both the TISQ and the TIRQ are checked. If both the TISQ and the TIRQ belong to the same cluster, then the predicted form factor value is accepted; otherwise, it is rejected. In other words, as values in the same cluster are generally closer, the output of the comparison of the TISQ and the TIRQ corresponds to accepting the predicted form factor if the difference is very small. If the TISQ is less than the TIRQ, and the difference is significantly large (i.e., different clusters), then the output corresponds to rejecting the predicted form factor.
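The acceptance rule described in this paragraph can be expressed compactly. The treatment of the exact-equality case is not specified in the text and is assumed here to accept:

```python
def validate_form_factor(tisq, tirq, tisq_class, tirq_class):
    """Accept the predicted form factor when the supported interactivity
    covers the required interactivity, or when both quotients fall in the
    same cluster (i.e., the shortfall is small)."""
    if tisq >= tirq:
        return True
    return tisq_class == tirq_class
```

Here `tisq_class` and `tirq_class` would be the 5-class labels maintained by the ICM for the two quotient values.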
Figure 5b illustrates a detailed block diagram 500b of the form factor prediction model and the validation model to predict the form factor during the training phase, according to an embodiment of the present invention. The input aggregation model 502 and the state provider models 504 provide similar functionality as previously discussed in conjunction with Figure 5a. In addition, the state provider models 504 also provide a current form factor state, which is preferred by the user during the current operating state and the current digital state. This current form factor state may be recorded as a form factor training label corresponding to the current operating state and the digital state and may be used during actual prediction of the form factor, as depicted in Figure 5a. Similarly, the ISM and the IRM may be trained with the corresponding training labels under different operating and digital states.
Figure 6 illustrates an exemplary process flow 600 for determining a form factor based on the current operating state of the user and/or the digital state of the mobile device.
At 602, the method includes receiving inputs from one or more sensors and an operating system of the mobile device. Further, in one embodiment, the sensors of the connected device(s) may also supplement the inputs required for determining the current operating state of the user.
At 604, the method includes determining a current operating state of the user and a current digital state of the mobile device. Based on the inputs received from the sensors and the operating system, the current operating state of the user and the current digital state of the mobile device during operation of the mobile device by the user may be determined. The current operating state may include one or more of a physical state, a holding state, and an environmental state of the user. The physical state may indicate a current physically active state of the user, such as walking, sitting, running, standing, climbing stairs, etc. In other words, the physical state represents conditions which define the current manner of movement of the user. The holding state of the user may indicate a manner of holding the mobile device by the user during the operation of the mobile device. For example, the holding state of the user may indicate whether the user is holding the mobile phone using his left hand, right hand, both hands with portrait orientation, or both hands with landscape orientation. The environmental state of the user may indicate at least a location of the user and/or indicate the environment in the vicinity of the user while operating the mobile device 200. For example, the environmental state may indicate whether the user is within his/her home or office, or whether the user is on a street or in a park, etc. Additionally, the environmental state of the user may indicate a current temperature, weather, and a time of the day. The current digital state of the mobile device may be determined based on a current state of at least one software component of an operating system of the mobile device. The operating system may provide inputs related to application usage, clock, calendars/schedule, network, and battery.
Accordingly, the current operating state measures the ease or difficulty of operating the mobile device, while the current digital state measures the interaction requirements of the mobile device. In one embodiment, the method may include determining a current form factor of the mobile device. In other words, it may be determined whether the mobile device is currently being used in phablet mode, phone mode, or tablet mode.
At 606, the method includes monitoring and detecting a change in at least one of a current operating state of the user and a current digital state of the mobile device. In an exemplary embodiment, a change may be detected by the mobile device in the current physical state of the user from running to sitting. In another exemplary embodiment, a change may be detected in the holding state of the user from one hand to both hands. In yet another exemplary embodiment, a change may be detected in the environmental state of the user from street to home. In yet another exemplary embodiment, a change may be detected in the digital state of the mobile device, e.g., an initiation of a video call detected through an application running in the foreground.
At 608, the method includes predicting a form factor for the mobile device based on the current operating state and the current digital state. In one embodiment, the form factor prediction module 306 may determine a roll value or fold value indicating the percentage of form factor (i.e., size or shape) required relative to a current form factor in the current operating and digital states. In one exemplary embodiment, the roll/fold value may correspond to a 125% form factor, i.e., the form factor of the mobile device needs to be modified to 125% of the current form factor, which may correspond to a change from phone to tablet mode.
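The roll/fold value in the 125% example can be interpreted as a simple scaling of a current physical dimension. The 80 mm starting width below is a hypothetical value, not one taken from the disclosure:

```python
def target_dimension(current_mm, roll_percent):
    """Physical dimension after applying a roll/fold value expressed as a
    percentage of the current form factor."""
    return current_mm * roll_percent / 100.0

# A hypothetical 80 mm display width rolled to 125% of the current form factor
width = target_dimension(80, 125)  # -> 100.0 mm
```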
At 610, the method includes determining whether to accept the predicted form factor based on touch interactions. The touch interactions may include any touch-based interactions of the user with the mobile device such as, but not limited to, taps, swipes, long presses, drags, pinches, etc. Additionally, the touch interactions may include the frequency and occurrence rate of such gestures over a period of time. In particular, the determination of whether to accept the predicted form factor is based on a number of touch interactions required for the current digital state of the mobile device, and further based on a number of touch interactions supported by the predicted form factor and the current operating state of the user. If it is determined to accept the predicted form factor, the method moves to step 612; else, the method moves back to step 604.
At 612, the method includes predicting a rate to transition the mobile device to the predicted form factor from a current form factor. The rate of transition from a current form factor to the predicted form factor may be determined to be one of fast, normal, or slow. Each of these three exemplary rates may correspond to a different speed of performing the transition. In one embodiment, the transition rate may indicate a rate or speed at which the mobile device is rolled or folded from the existing physical dimensions and display size to the physical size corresponding to the predicted form factor.
In one embodiment, the rate of transition may be determined based on the current operating state of the user and the current digital state of the mobile device using a machine learning (ML) or neural network-based learning model. Specifically, the determination of the rate is based on historical data related to a recorded speed at which the user previously performed the manual change of the form factor on the mobile device in similar operating and digital conditions.
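As a stand-in for the ML or neural network-based rate predictor, a frequency lookup over historical manual transitions conveys the idea of reusing the user's previously recorded speeds. The record structure and the "normal" fallback are assumptions:

```python
from collections import Counter

def predict_transition_rate(history, operating_state, digital_state):
    """Return the most frequently recorded manual transition speed for the
    given (operating state, digital state) pair; default to 'normal' when
    no matching history exists."""
    speeds = [rate for (op, ds), rate in history
              if (op, ds) == (operating_state, digital_state)]
    if not speeds:
        return "normal"
    return Counter(speeds).most_common(1)[0][0]
```

A learned model would generalize across similar (rather than identical) operating and digital conditions, which this exact-match lookup cannot.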
At 614, the method includes providing the predicted form factor and the rate of transition for the mobile device based on a determination to accept the predicted form factor. The predicted form factor and the rate of transition may be provided to the operating system of the mobile device for further processing. The outputting of the predicted form factor corresponds to a recommendation to modify the physical size/dimensions of the mobile device. In response to receiving the predicted form factor, the operating system may implement the predicted form factor by modifying the physical size/dimensions and display interface size of the mobile device.
At 616, the method includes re-training or updating the AI model. In response to providing the predicted form factor to the operating system, it may be determined whether the user accepted the form factor changes or manually negated the changes. Based on a determination that the user negated the changes, the AI model may be trained or updated to adjust the weights in the neural network model.
Figure 7 illustrates an exemplary use case of acceptance of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
At 702, the initial operating conditions (i.e., the current operating state) of the user and the current digital state of the mobile device are determined. The current operating state may include one or more of a physical state (PS), a holding state (HS), and an environmental state (ES) of the user. For example, the PS may correspond to "standing", the ES may correspond to "inside kitchen", and the HS may correspond to "not held" (kept on a kitchen slab). Further, the current digital state (DS) may correspond to "browsing recipe in cooking application". Additionally, the initial/current form factor of the mobile device may be determined, i.e., phablet mode.
At 704, a video call may be received at the mobile device. At 706, it may be determined whether the video call is picked up. Based on the determination that the video call is picked up and that the user is using multi-window mode for the video call, a new form factor may be predicted based on changes in the initial operating state and the digital state at 708. For example, the form factor may be predicted as "Tablet mode" as a bigger device size may be better suited when multi-window mode is used at home and when the device is kept on a surface. The form factor value prediction may be based on state vectors of the current operating state and the digital state.
At 710, the quantity of touch interactions supported, i.e., the TISQ, for the present operating state and the predicted form factor (tablet mode) may be estimated. In the present embodiment, the TISQ may be determined and categorized as "very high" as the user may provide very high touch operability in tablet mode while standing at home and while the phone is kept on a surface (not held).
At 712, the quantity of touch interactions required (the TIRQ) in multi-window mode (i.e., video calling and cooking recipe) may be estimated. The TIRQ may be estimated based on the current digital state vectors. In the present embodiment, the TIRQ may be determined and categorized as "low" as very few touch interactions are required during video calling and recipe viewing.
At 714, the TISQ may be compared with the TIRQ. Since the TISQ is higher than the TIRQ, the prediction may be accepted at 716, else rejected at 718. Thus, the form factor value corresponding to tablet mode will be accepted and provided as an output to the operating system of the mobile device. In response, the mobile device may process the form factor value and initiate automatic change in form factor of the mobile device from phablet mode to the tablet mode.
Figure 8 illustrates an exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
At 802, the initial operating conditions (i.e., the current operating state) of the user and the current digital state of the mobile device are determined. The current operating state may include one or more of a physical state (PS), a holding state (HS), and an environmental state (ES) of the user. For example, the PS may correspond to "dancing", the ES may correspond to "at club in dark", and the HS may correspond to "primary hand". Further, the current digital state (DS) may correspond to "camera application open in video mode". Additionally, the initial/current form factor of the mobile device may be determined, i.e., phone mode.
At 804, a change in the initial digital state may be detected, i.e., an application related to reading an Excel file may be opened. This application may have been opened by the user inadvertently.
At 806, in response to the detected change in the digital state, a new form factor may be predicted based on changes in the initial operating state and the digital state. For example, the form factor may be predicted as "Tablet mode" as a bigger device size may be better suited for reading Excel files. The form factor value prediction may be based on state vectors of the current operating state and the digital state.
At 808, the quantity of touch interactions supported, i.e., the TISQ, for the present operating state and the predicted form factor (tablet mode) may be estimated. In the present embodiment, the TISQ may be determined and categorized as "low" as the user may provide only low touch operability, even in tablet mode, while dancing in a club and operating the phone with just one hand.
At 810, the quantity of touch interactions required (the TIRQ) in Excel reading and video mode may be estimated. The TIRQ may be estimated based on the current digital state vectors. In the present embodiment, the TIRQ may be determined and categorized as "high" as many touch interactions are required for operating an Excel-document-related application.
At 812, the TISQ may be compared with the TIRQ. Since the TISQ is lower than the TIRQ, the prediction will be rejected at 814. Thus, the form factor value corresponding to tablet mode will be rejected and no output recommendation will be provided to the operating system of the mobile device.
Figure 9 illustrates another exemplary use case of rejection of a predicted form factor for a mobile device by the validation module, according to an embodiment of the present invention.
At 902, the initial operating conditions (i.e., the current operating state) of the user and the current digital state of the mobile device are determined. The current operating state may include one or more of a physical state (PS), a holding state (HS), and an environmental state (ES) of the user. For example, the PS may correspond to "jogging", the ES may correspond to "in park", and the HS may correspond to "primary hand".
Further, the current digital state (DS) may correspond to "locked and screen off". Additionally, the initial/current form factor of the mobile device may be determined, i.e., band mode.
At 904, a change in the initial digital state may be detected, i.e., an application related to composing an email may be opened. This application may have been opened by the user inadvertently.
At 906, in response to the detected change in the digital state, a new form factor may be predicted based on changes in the initial operating state and the digital state. For example, the form factor may be predicted as "Phablet mode" as a bigger device size may be better suited for composing emails. The form factor value prediction may be based on state vectors of the current operating state and the digital state.
At 908, the quantity of touch interactions supported, i.e., the TISQ, for the present operating state and the predicted form factor (phablet mode) may be estimated. In the present embodiment, the TISQ may be determined and categorized as "very low" as the user may provide only low touch operability, even in phablet mode, while jogging in a park and operating the phone with just one hand.
At 910, the quantity of touch interactions required (the TIRQ) in email composing may be estimated. The TIRQ may be estimated based on the current digital state vectors. In the present embodiment, the TIRQ may be determined and categorized as "high" as many touch interactions are required for composing an email.
At 912, the TISQ may be compared with the TIRQ. Since the TISQ is lower than the TIRQ, the prediction will be rejected at 914. Thus, the form factor value corresponding to the phablet mode will be rejected and no output recommendation will be provided to the operating system of the mobile device.
Thus, the form factor will not be modified, thereby eliminating false or inadvertent inputs due to a mistakenly opened application (triggering a change in the digital state). In other words, the present invention facilitates the elimination of false inputs through validation of the predicted form factor value. Advantageously, while the present invention indeed provides an automated smart function to trigger form factor modification for mobile devices, the present invention also facilitates identifying and eliminating inadvertent inputs by users.
Additionally, the proposed solutions in the present disclosure provide intelligent systems and methods for predicting an ideal form factor for the mobile device based on the current operating and digital conditions of the user and the mobile device, respectively. The predicted form factor facilitates automatic adjustment of the form factor for the mobile device without any user intervention or manual commands. Advantageously, the handling and operation of smartphones are made efficient while the user is operating the mobile device in different conditions and for different purposes.
While specific language has been used to describe the present subject matter, any limitations arising on account thereof are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment.

Claims (14)

  1. A method of providing a form factor for a mobile device, the method comprising:
    detecting a change in at least one of a current operating state and a current digital state of the mobile device;
    predicting a form factor for the mobile device based on the current operating state and the current digital state of the mobile device;
    determining whether to accept the predicted form factor based on a number of touch interactions required for the current digital state of the mobile device and a number of touch interactions supported by the predicted form factor and the current operating state; and
    providing the predicted form factor for the mobile device based on a determination to accept the predicted form factor, wherein the predicted form factor corresponds to a recommendation to modify physical size of the mobile device.
  2. The method of claim 1, wherein the mobile device is a rollable device.
  3. The method of claim 2, comprising:
    determining a roll rate to roll the mobile device based on the determination to accept the predicted form factor, wherein the roll rate is determined based on the current operating state and the current digital state of the mobile device using a machine learning (ML) or neural network-based learning model; and
    providing the roll rate to modify the physical size of the mobile device at a speed of the roll rate.
  4. The method of claim 1, wherein the mobile device is a variable physical size device, and wherein the predicted form factor corresponds to a recommendation to modify at least one physical dimension and a display size of the mobile device.
  5. The method of claim 1, comprising determining the current operating state based on at least one of a physical state, a holding state, and an environmental state of the user during operation of the mobile device.
  6. The method of claim 1, comprising determining the current digital state of the mobile device based on a state of at least one software component of an operating system of the mobile device.
  7. The method of claim 1, wherein the determining whether to accept the predicted form factor comprises:
    determining a touch interaction support quotient based on the number of touch interactions supported by the predicted form factor and the current operating state;
    determining a touch interaction requirement quotient based on the number of touch interactions required for the current digital state of the mobile device; and
    comparing the touch interaction support quotient and the touch interaction requirement quotient to accept or reject the predicted form factor.
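The acceptance test of claim 7 can be made concrete with the sketch below. The capability table and the way touch requirements are counted are hypothetical, chosen only to illustrate the quotient comparison; the disclosure leaves both to a learned model.

```python
def touch_support_quotient(form_factor, operating_state):
    # Hypothetical capability table: larger form factors support more
    # simultaneous touch interactions, as does two-handed holding.
    base = {"rolled": 1, "partially_unrolled": 2, "unrolled": 4}[form_factor]
    if operating_state.get("holding_state") == "two_hands":
        base += 1
    return base

def touch_requirement_quotient(digital_state):
    # Touch interactions required by the current digital state, counted
    # here as the active touch targets of the foreground application.
    return len(digital_state.get("active_touch_targets", []))

def accept_predicted_form_factor(form_factor, operating_state, digital_state):
    # Accept only if the predicted form factor, together with the current
    # operating state, supports at least the required touch interactions.
    return (touch_support_quotient(form_factor, operating_state)
            >= touch_requirement_quotient(digital_state))
```

For example, an unrolled device held in two hands (support quotient 5) would accept a digital state needing three touch targets, while a fully rolled device (support quotient 1) would reject it.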
  8. An apparatus for providing a form factor for a mobile device, the apparatus comprising:
    a transceiver; and
    at least one processor,
    wherein the at least one processor is configured to:
    detect a change in at least one of a current operating state and a current digital state of the mobile device,
    predict a form factor for the mobile device based on the current operating state and the current digital state of the mobile device,
    determine whether to accept the predicted form factor based on a number of touch interactions required for the current digital state of the mobile device and a number of touch interactions supported by the predicted form factor and the current operating state, and
    provide the predicted form factor for the mobile device based on a determination to accept the predicted form factor, wherein the predicted form factor corresponds to a recommendation to modify a physical size of the mobile device.
  9. The apparatus of claim 8, wherein the at least one processor includes a machine learning (ML) or neural network-based learning model trained to provide the form factor for the mobile device in real time.
  10. The apparatus of claim 9, wherein the at least one processor is trained by:
    detecting a change in at least one of the current operating state and the current digital state of the mobile device;
    recording the current operating state and the current digital state of the mobile device, wherein the current operating state comprises a form factor state of the mobile device, and a physical state, a holding state, and an environmental state during operation of the mobile device;
    training a first ML or neural network-based learning model using the physical state, the environmental state, the holding state, and the current digital state as input data and the form factor state as a first training label;
    determining an interaction quotient based on a number of touch interactions in the current digital state;
    training a second ML or neural network-based learning model using the physical state, the environmental state, the holding state, and the form factor state as input data and the interaction quotient as a second training label;
    training a third ML or neural network-based learning model based on the current digital state as input data and the interaction quotient as a third training label; and
    clustering the interaction quotient into a plurality of classes.
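The final clustering step of claim 10 can be illustrated by bucketing interaction quotients into ordered classes. The three-class scheme and the fixed thresholds below are illustrative assumptions; in practice a trained clustering model (e.g. k-means) would learn the class boundaries.

```python
def cluster_interaction_quotients(quotients, boundaries=(2, 5)):
    """Bucket interaction quotients into ordered classes.  Fixed
    thresholds stand in for learned cluster boundaries, purely for
    illustration of the clustering step."""
    classes = []
    for q in quotients:
        if q <= boundaries[0]:
            classes.append("low")
        elif q <= boundaries[1]:
            classes.append("medium")
        else:
            classes.append("high")
    return classes
```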
  11. The apparatus of claim 8, wherein the at least one processor is further configured to:
    interface with at least one sensor, one or more other connected devices, and an operating system of the mobile device,
    determine the current operating state based on at least one of a physical state, a holding state, and an environmental state during operation of the mobile device, and
    determine the current digital state based on a state of at least one software component of the operating system of the mobile device.
  12. The apparatus of claim 8, wherein the mobile device is a rollable device.
  13. The apparatus of claim 8, wherein the at least one processor is further configured to:
    determine a roll rate to roll the mobile device based on the determination to accept the predicted form factor, wherein the roll rate is determined based on the current operating state and the current digital state of the mobile device using a machine learning (ML) or neural network-based learning model, and
    provide the roll rate to modify the physical size of the mobile device at a speed of the roll rate.
  14. The apparatus of claim 8, wherein the at least one processor is configured to:
    determine a touch interaction support quotient based on the number of touch interactions supported by the predicted form factor and the current operating state,
    determine a touch interaction requirement quotient based on the number of touch interactions required for the current digital state of the mobile device, and
    compare the touch interaction support quotient and the touch interaction requirement quotient to accept or reject the predicted form factor.
PCT/KR2022/015889 2021-10-18 2022-10-18 Method of recommending form factor of rollable smartphone WO2023068773A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202111047254 2021-10-18
IN202111047254 2021-10-18

Publications (1)

Publication Number Publication Date
WO2023068773A1

Family

ID=86059446

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/015889 WO2023068773A1 (en) 2021-10-18 2022-10-18 Method of recommending form factor of rollable smartphone

Country Status (1)

Country Link
WO (1) WO2023068773A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190155492A1 (en) * 2015-04-16 2019-05-23 Samsung Electronics Co., Ltd. Display apparatus and method for displaying
CN111526231A (en) * 2020-05-11 2020-08-11 昆山国显光电有限公司 Polymorphic storage display device and control method thereof
WO2021045276A1 (en) * 2019-09-06 2021-03-11 엘지전자 주식회사 Mobile terminal and control method therefor
WO2021085658A1 (en) * 2019-10-28 2021-05-06 엘지전자 주식회사 Electronic device including display changing in size and control method therefor
US20210157366A1 (en) * 2019-11-22 2021-05-27 Lg Electronics Inc. Electronic apparatus for controlling size of display and control method thereof


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 22883995; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 22883995; Country of ref document: EP; Kind code of ref document: A1