US20210118313A1 - Virtualized Tangible Programming - Google Patents
- Publication number
- US20210118313A1 (U.S. application Ser. No. 17/138,651)
- Authority
- US
- United States
- Prior art keywords
- physical interface
- interface object
- physical
- command
- commands
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/63—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/655—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/90—Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
- A63F13/98—Accessories, i.e. detachable arrangements optional for the use of the video game device, e.g. grip supports of game controllers
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
- G05B19/0426—Programming the control sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/002—Specific input/output arrangements not covered by G06F3/01 - G06F3/16
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0425—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/34—Graphical or visual programming
-
- G06K9/00744—
-
- G06K9/4604—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B1/00—Manually or mechanically operated educational appliances using elements forming, or bearing, symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways
- G09B1/32—Manually or mechanically operated educational appliances using elements forming, or bearing, symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways comprising elements to be used without a special support
- G09B1/325—Manually or mechanically operated educational appliances using elements forming, or bearing, symbols, signs, pictures, or the like which are arranged or adapted to be arranged in one or more particular ways comprising elements to be used without a special support the elements comprising interacting electronic components
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
- G09B19/0053—Computers, e.g. programming
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/23—Pc programming
- G05B2219/23045—Function key changes function as function of program, associated pictogram
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/23—Pc programming
- G05B2219/23157—Display process, synoptic, legend, pictogram, mimic
Definitions
- the present disclosure relates to virtualized tangible programming.
- a tangible user interface is a physical environment that a user can physically interact with to manipulate digital information. While tangible user interfaces have opened up a new range of possibilities for interacting with digital information, significant challenges remain when implementing such an interface. For instance, existing tangible user interfaces generally require expensive, high-quality sensors to digitize user interactions with this environment, which results in systems incorporating these tangible user interfaces being too expensive for most consumers. In addition, these existing systems are often difficult to set up and use, which has led to limited customer use and adoption.
- a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes the system to perform the actions.
- One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- One general aspect includes a computer-implemented method including: detecting objects in image data; performing comparisons between each of the objects and a predefined set of object definitions; recognizing each of the objects as a visually quantified object or a visually unquantified object based on the comparisons; for each of the objects that is recognized as a visually quantified object, processing a command region and a quantifier region from the object, identifying a corresponding command for the object based on a particular visual attribute of the command region, and identifying a quantifier for the command based on a particular visual attribute of the quantifier region; for each of the objects that is recognized as a visually unquantified object, identifying a corresponding command for the object based on a particular visual attribute of the object; and executing, using a computer processor, a set of commands including the corresponding command for each of the objects detected in the image data.
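- As a concrete illustration of the pipeline above, the following minimal Python sketch maps recognized objects to an instruction set. The class names, the move/jump/play vocabulary, and the dictionary-based instruction entries are assumptions for illustration only; the detection, comparison, and execution steps are sketched separately below.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ObjectDefinition:
    name: str            # e.g. "move", "jump", "play"
    quantified: bool     # True if this tile type carries a quantifier region
    command: str         # command emitted when the tile is recognized

@dataclass
class DetectedObject:
    definition: ObjectDefinition
    quantifier: Optional[int] = None   # parsed from the quantifier region, if any

def build_instruction_set(objects: List[DetectedObject]) -> List[Dict]:
    """Map each recognized tile to one command entry, keeping its quantifier."""
    instruction_set: List[Dict] = []
    for obj in objects:
        entry = {"command": obj.definition.command}
        if obj.definition.quantified:
            entry["count"] = obj.quantifier or 1   # from the visually quantified object
        instruction_set.append(entry)
    return instruction_set
```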
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- Implementations may include one or more of the following features.
- the computer-implemented method further including: displaying a virtual environment in a user interface, the virtual environment including a target virtual object; and capturing the image data, which depicts a sequence of physical interface objects arranged in a physical environment, where detecting the objects in the image data includes detecting representations of the physical interface objects forming the sequence.
- the computer-implemented method further including: displaying a virtual environment in a user interface, the virtual environment including a target virtual object; visually manipulating the target virtual object in the virtual environment responsive to executing the set of commands.
- the computer-implemented method where executing the set of commands includes: building an instruction set including the corresponding command for each of the objects detected in the image data; generating one or more clusters of the objects based on relative positions and relative orientations of the objects; determining a sequence for the commands of the instruction set based on the one or more clusters; and executing the instruction set using the computer processor.
- the computer-implemented method further including: determining that a candidate object is missing from a candidate location in the one or more clusters based on the relative positions and relative orientations of the objects; and injecting, into the instruction set, a command corresponding to the candidate object at a position corresponding to the candidate location.
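- A hedged sketch of the clustering/sequencing idea, reduced to a single left-to-right row of tiles: detected tiles are ordered by position, and a command for a likely missing tile (e.g., one occluded by the user's hand) is injected when the gap between neighbors is larger than a tile should leave. The Tile fields, the 1.8-width gap heuristic, and the default injected command are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tile:
    command: str
    x: float          # centroid of the detected tile in image coordinates
    y: float
    width: float

def order_and_fill(tiles: List[Tile], default_command: str = "move") -> List[str]:
    # A fuller implementation would first cluster tiles into rows/groups by y
    # and orientation; a single left-to-right row is assumed here.
    row = sorted(tiles, key=lambda t: t.x)
    commands: List[str] = []
    for prev, cur in zip(row, row[1:]):
        commands.append(prev.command)
        if cur.x - prev.x > 1.8 * prev.width:     # roughly room for one more tile
            commands.append(default_command)      # inject command for the missing tile
    if row:
        commands.append(row[-1].command)
    return commands
```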
- the computer-implemented method where, for each of the objects that is recognized as a visually unquantified object, identifying the corresponding command for the object based on the particular visual attribute of the object includes: identifying an end object for a sequence of the objects detected from the image data; and determining a physical state of the end object from the image data, where executing the set of commands includes determining to execute based on the physical state of the end object detected from the image data.
- a physical object associated with the end object depicted by the image data includes a user-pressable button that changes an aspect of the physical object from a first state to a second state in which the user-pressable button is in a pressed state that is visually perceptible, the image data depicts the end object in the second state, and determining the physical state of the end object includes using blob detection and machine learning to determine that the physical state of the end object is a pressed state.
- the computer-implemented method where the end object includes a physical state including one of a pressed state, an unpressed state, a semi-pressed state, and a rubbish state that is indeterminable.
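- The description above refers to blob detection and machine learning for determining the button state; the sketch below substitutes a simple OpenCV blob-size heuristic for the learned classifier. The crop, thresholds, and mapping to the four states are assumptions for illustration.

```python
import cv2
import numpy as np

def classify_button_state(button_crop: np.ndarray) -> str:
    """Classify a cropped image of the end tile's button into one of four states."""
    gray = cv2.cvtColor(button_crop, cv2.COLOR_BGR2GRAY)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 20
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(gray)

    if not keypoints:
        return "rubbish"               # indeterminable: no recognizable blob found
    diameter = max(kp.size for kp in keypoints)
    if diameter > 40:
        return "pressed"               # large mark visible where the button sank in
    if diameter > 20:
        return "semi-pressed"
    return "unpressed"
```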
- the computer-implemented method where recognizing each of the objects as a visually quantified object includes performing blob detection to detect a directional region of at least one object of the objects as including a directional indicator, and processing the command region and the quantifier region includes dividing the object into the action region and the quantifier region based on the directional region.
- the particular visual attribute of the command region includes a predetermined color or graphic, and the particular visual attribute of the quantifier region includes a number.
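- One way such a visually quantified tile could be parsed, assuming the command region occupies the left half of the tile and is identified by hue while the quantifier region holds a printed number. The color table is invented, and digit recognition is left as a stub since the description does not prescribe a method.

```python
import cv2
import numpy as np

COMMAND_COLORS = {            # hypothetical hue ranges (OpenCV hue runs 0-179)
    "move": (50, 70),         # green-ish command region
    "jump": (100, 130),       # blue-ish command region
    "action": (0, 10),        # red-ish command region
}

def recognize_digit(region_bgr: np.ndarray) -> int:
    """Placeholder for digit recognition on the quantifier region."""
    return 1

def parse_quantified_tile(tile_bgr: np.ndarray):
    h, w = tile_bgr.shape[:2]
    command_region = tile_bgr[:, : w // 2]       # assumed left half of the tile
    quantifier_region = tile_bgr[:, w // 2 :]    # assumed right half of the tile

    hue = float(np.mean(cv2.cvtColor(command_region, cv2.COLOR_BGR2HSV)[:, :, 0]))
    command = next(
        (name for name, (lo, hi) in COMMAND_COLORS.items() if lo <= hue <= hi),
        "unknown",
    )
    return command, recognize_digit(quantifier_region)
```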
- the computer-implemented method where executing the instruction set further includes: displaying a virtual environment in a user interface, the virtual environment including a target virtual object; determining a path of the target virtual object through a portion of the virtual environment based on the instruction set; and displaying a path projection of the path to a user.
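- A minimal sketch of computing such a path projection from an instruction set, assuming a grid-based virtual environment and per-command step sizes that are purely illustrative.

```python
from typing import List, Tuple

STEPS = {                 # assumed displacement per command, in grid cells
    "move": (1, 0),
    "jump": (2, 0),
    "action": (0, 0),     # acts in place, e.g. picks up an adjacent item
}

def project_path(start: Tuple[int, int], commands: List[str]) -> List[Tuple[int, int]]:
    """Return the sequence of grid cells the target virtual object would visit."""
    path = [start]
    x, y = start
    for cmd in commands:
        dx, dy = STEPS.get(cmd, (0, 0))
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# project_path((0, 0), ["move", "move", "jump"]) -> [(0, 0), (1, 0), (2, 0), (4, 0)]
```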
- the computer-implemented method where the command region includes an action region and a direction region and where identifying the quantified command based on the visual attributes of the command region further includes: identifying an action command based on visual attributes of the action region; and identifying a direction command based on visual attributes of the direction region.
- the computer-implemented method where the specific command includes one of a jump command, a move command, and an action command.
- the computer-implemented method where executing the specific command based on the quantifier further includes: repeating for an amount of the quantifier, the executing of one of the jump command, the move command, and the action command.
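- Executing a quantified command can be as simple as repeating the corresponding handler the quantifier number of times, as in this sketch; the Character class stands in for the target virtual object, and its methods are assumptions.

```python
class Character:
    """Stand-in for the target virtual object 122."""
    def __init__(self):
        self.x = 0
    def move(self):
        self.x += 1
    def jump(self):
        self.x += 2
    def action(self):
        pass

def execute_quantified(entry: dict, target: Character) -> None:
    handlers = {"move": target.move, "jump": target.jump, "action": target.action}
    for _ in range(entry.get("count", 1)):       # repeat for the quantifier amount
        handlers[entry["command"]]()

# execute_quantified({"command": "move", "count": 3}, Character()) moves three cells.
```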
- the computer-implemented method where executing the specific command based on the quantifier to manipulate the virtual object includes presenting the virtual object moving through a virtual environment based on the specific command.
- the computer-implemented method further including: generating a new virtual object for presentation on the display device based on the specific command.
- the physical interface object further including: a pressable button situated on the top surface, where the pressable button may be interacted with to transition the pressable button between states when the pressable button is depressed.
- the physical interface object where the visual aspects are configured to alter their appearance in response to a pressable button being depressed.
- the physical interface object where the compatible physical interface object includes a second top surface including one or more second visual aspects.
- the physical interface object further including: a dial coupled to the top surface, the dial including one or more visual directional indicator aspects.
- the physical interface object where the dial is a rotatable dial that can be rotated horizontally to the position of the top surface.
- One general aspect includes the computer-implemented method where the candidate object is one of an end object, an event object, and an action object missing from the image data.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- One general aspect includes a computer-implemented method including: detecting an object from image data; recognizing the object as a numerically quantified object based on a predetermined visual characteristic; processing the recognized object into a command region and a quantifier region; identifying a specific command for manipulating, based on a visual attribute of the command region, a virtual object rendered for display in a virtual environment displayed on a display of the computing device; identifying a quantifier for the specific command based on a visual attribute of the quantifier region; and executing, using a processor of the computing device, the specific command based on the quantifier to manipulate the virtual object.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- Implementations may include one or more of the following features.
- the computer-implemented method where the specific command includes one of a jump command, a move command, and an action command.
- the computer-implemented method where executing the specific command based on the quantifier further includes: repeating for an amount of the quantifier, the executing of one of the jump command, the move command, and the action command.
- the computer-implemented method where executing the specific command based on the quantifier to manipulate the virtual object includes presenting the virtual object moving through a virtual environment based on the specific command.
- the computer-implemented method further including: generating a new virtual object for presentation on the display device based on the specific command.
- the physical interface object further including: a pressable button situated on the top surface, where the pressable button may be interacted with to transition the pressable button between states when the pressable button is depressed.
- the physical interface object where the visual aspects are configured to alter their appearance in response to a pressable button being depressed.
- the physical interface object where the compatible physical interface object includes a second top surface including one or more second visual aspects.
- the physical interface object further including: a dial coupled to the top surface, the dial including one or more visual directional indicator aspects.
- the physical interface object where the dial is a rotatable dial that can be rotated horizontally to the position of the top surface.
- the physical interface object where the physical interface object includes a command region and the compatible physical interface object includes a quantifier region such that when the physical interface object is coupled with the compatible physical interface object a visually quantified object is formed.
- Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
- One general aspect includes a computer-implemented method including: presenting a user interface including a virtual environment and a target object, determining an initial state of the target object in the virtual environment of the user interface, capturing an image of a physical activity surface, processing the image to detect two or more physical interface objects in a specific orientation, comparing the physical interface objects in the specific orientation to a predefined set of instructions, determining a command represented by the physical interface objects in the specific orientation based on the comparison, determining a path through the virtual environment for the target object using the command, and displaying a path projection in the user interface along the path for presentation to a user.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- One general aspect includes a computer-implemented method including: receiving, from a video capture device, a video stream that includes a physical activity scene of a physical activity surface, proximate to a display device, and one or more physical interface objects placed on the physical activity scene and physically interactable with by a user; processing, using one or more computing devices, the video stream to detect the one or more physical interface objects included in the physical activity scene; performing comparisons between each of the physical interface objects and a predefined set of object definitions; recognizing each of the physical interface objects as a visually quantified object or a visually unquantified object based on the comparisons; for each of the physical interface objects that is recognized as a visually quantified object, processing a command region and a quantifier region from the object, identifying a corresponding command for the physical interface object based on a particular visual attribute of the command region, and identifying a quantifier for the command based on a particular visual attribute of the quantifier region; for each of the physical interface objects that is recognized as a visually unquantified object, identifying a corresponding command for the physical interface object based on a particular visual attribute of the object; and executing, using the one or more computing devices, a set of commands including the corresponding command for each of the physical interface objects detected in the video stream.
- One general aspect includes a visual tangible programming system including: a video capture device coupled for communication with a computing device, the video capture device being adapted to capture a video stream that includes a physical activity scene adjacent to the computing device; a detector, coupled to the computing device, the detector being adapted to detect within the video stream a sequence of physical interface objects in the physical activity scene; a processor of the computing device, the processor being adapted to compare the sequence of physical interface objects to a predefined set of object definitions and recognize visually quantified objects and visually unquantified objects based on the comparison, and execute a set of commands based on the visually quantified objects and visually unquantified objects; and a display coupled to the computing device, the display being adapted to display an interface that includes a virtual scene and update the virtual scene based on the executed set of commands.
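- A hedged sketch of how such a system could be wired together: a capture device feeding frames to a detector, a compare/recognize step producing commands, and a virtual scene that is updated in response. All class and function names are invented for illustration, and the detector and recognizer are stubs.

```python
import cv2

class VirtualScene:
    """Stand-in for the displayed virtual scene; real rendering is out of scope."""
    def apply(self, commands):
        print("executing:", commands)

def detect_interface_objects(frame):
    """Stub detector; a real one would use contour/color analysis as sketched later."""
    return []

def tiles_to_commands(tiles):
    """Stub compare/recognize step; returns one command per detected tile."""
    return list(tiles)

def run_loop(camera_index: int = 0) -> None:
    capture = cv2.VideoCapture(camera_index)        # the video capture device
    scene = VirtualScene()                          # backs the display
    try:
        while capture.isOpened():
            ok, frame = capture.read()              # one frame of the physical activity scene
            if not ok:
                break
            tiles = detect_interface_objects(frame)     # detector
            commands = tiles_to_commands(tiles)         # processor: compare + recognize
            if commands:
                scene.apply(commands)                   # update the virtual scene
    finally:
        capture.release()
```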
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- One general aspect includes a physical interface object for constructing a computer program in a physical space including: a housing including a top surface, a lower side surface, an upper side surface, a left side surface, a right side surface, and a bottom surface; the top surface including one or more visual aspects; one or more of the lower side surface, the upper side surface, the left side surface, and the right side surface including one or more magnetic fasteners configured to couple to a corresponding side surface of a compatible physical interface object; and one or more of the lower side surface, the upper side surface, the left side surface, and the right side surface including an alignment mechanism for coupling to a compatible alignment mechanism of a compatible physical interface object.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- Implementations may include one or more of the following features.
- the physical interface object further including: a pressable button situated on the top surface, where the pressable button may be interacted with to transition the pressable button between states when the pressable button is depressed.
- the physical interface object where the visual aspects are configured to alter their appearance in response to a pressable button being depressed.
- the physical interface object where the compatible physical interface object includes a second top surface including one or more second visual aspects.
- the physical interface object further including: a dial coupled to the top surface, the dial including one or more visual directional indicator aspects.
- the physical interface object where the dial is a rotatable dial that can be rotated horizontally to the position of the top surface.
- the physical interface object where the physical interface object includes a command region and the compatible physical interface object includes a quantifier region such that when the physical interface object is coupled with the compatible physical interface object a visually quantified object is formed.
- Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
- FIG. 1 is a graphical representation illustrating an example configuration for virtualized tangible programming.
- FIG. 2 is a block diagram illustrating an example computer system for virtualized tangible programming.
- FIG. 3 is a block diagram illustrating an example computing device.
- FIGS. 4A-4D are graphical representations illustrating example physical interface objects.
- FIG. 5 is a graphical representation illustrating an example sequence of physical interface objects.
- FIG. 6 is a flowchart of an example method for virtualized tangible programming.
- FIG. 7 is a flowchart of an example method for virtualized tangible programming.
- FIGS. 8A-8E are graphical representations illustrating example interfaces for virtualized tangible programming.
- FIGS. 9A-13H portray various views of example physical interface objects.
- the technology described herein provides a platform for a real time, tangible programming environment.
- the programming environment is intuitive and allows users to understand how to construct programs without prior training. For example, a user may create a sequence of physical interface objects and cause a virtual scene to change based on executed commands that correspond to the sequence of physical interface objects.
- FIG. 1 is a graphical representation illustrating an example configuration 100 of a system for virtualized tangible programming.
- the configuration 100 may be used for various activities in the physical activity scene 116 .
- the configuration 100 includes, in part, a tangible, physical activity surface 102 on which physical interface object(s) 120 may be placed (e.g., drawn, created, molded, built, projected, etc.) and a computing device 104 that is equipped or otherwise coupled to a video capture device 110 configured to capture video of the activity surface 102 .
- the physical interface object(s) 120 may be arranged in the physical activity scene 116 in a collection, which may form a computer program (e.g., a sequence of programming instructions/commands).
- the various components that make up the physical interface object(s) 120 may be coupled (e.g., mated, aligned, slid in and out) together in different combinations and/or physically manipulated (e.g., rotated, pressed, switched, etc.). For example, verb tiles may be joined with other verb tiles; unit tiles and/or adverb tiles may be joined with verb tiles (e.g., like puzzle pieces); directional dials may be rotated; etc. (One possible data model for these couplings is sketched after this item.)
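- One possible, purely hypothetical data model for these couplings; the tile kinds and coupling rules paraphrase the description above, and everything else is an assumption.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProgramTile:
    kind: str                             # "verb", "unit", "adverb", "dial", "end", ...
    command: str = ""                     # e.g. "move", "jump", "play"
    quantifier: Optional[int] = None      # supplied by an attached unit tile
    dial_direction: Optional[str] = None  # e.g. "up"/"right" read from a rotated dial

def can_couple(left: ProgramTile, right: ProgramTile) -> bool:
    """Verb tiles chain with verb tiles; unit/adverb tiles attach to a verb tile."""
    if left.kind == "verb" and right.kind == "verb":
        return True
    if left.kind == "verb" and right.kind in ("unit", "adverb"):
        return True
    return False
```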
- the computing device 104 includes novel software and/or hardware capable of executing commands to manipulate a target virtual object 122 based on the physical interface object(s) 120 .
- While the activity surface 102 is depicted as substantially horizontal in FIG. 1, it should be understood that the activity surface 102 can be vertical or positioned at any other angle suitable to the user for interaction.
- the activity surface 102 can have any color, pattern, texture, and topography.
- the activity surface 102 can be substantially flat or be disjointed/discontinuous in nature.
- Non-limiting examples of an activity surface 102 include a table, desk, counter, ground, a wall, a whiteboard, a chalkboard, a customized surface, etc.
- the activity surface 102 may additionally or alternatively include a medium on which the user may render physical interface object(s) 120 , such as paper, canvas, fabric, clay, foam, or other suitable medium.
- the activity surface 102 may be preconfigured for certain activities.
- an example configuration may include an activity surface 102 that includes a physical activity scene 116 , such as a whiteboard or drawing board.
- the physical activity scene 116 may be integrated with the stand 106 or may be distinct from the stand 106 but placeable adjacent to the stand 106 .
- the physical activity scene 116 can indicate to the user the boundaries of the activity surface 102 that is within the field of view of the video capture device 110 .
- the physical activity scene 116 may be a board, such as a chalkboard or whiteboard, separate from the activity surface 102 .
- the size of the interactive area on the physical activity scene 116 may be bounded by the field of view of the video capture device 110 and can be adapted by an adapter 108 and/or by adjusting the position of the video capture device 110 .
- the physical activity scene 116 may be a light projection (e.g., pattern, context, shapes, etc.) projected onto the activity surface 102 .
- the computing device 104 included in the example configuration 100 may be situated on the physical activity surface 102 or otherwise proximate to the physical activity surface 102 .
- the computing device 104 can provide the user(s) with a virtual portal for viewing a virtual scene 118 .
- the computing device 104 may be placed on a table in front of a user so the user can easily see the computing device 104 while interacting with physical interface object(s) 120 on the physical activity surface 102 .
- Example computing devices 104 may include, but are not limited to, mobile phones (e.g., feature phones, smart phones, etc.), tablets, laptops, desktops, netbooks, TVs, set-top boxes, media streaming devices, portable media players, navigation devices, personal digital assistants, etc.
- the computing device 104 includes or is otherwise coupled (e.g., via a wireless or wired connection) to a video capture device 110 (also referred to herein as a camera) for capturing a video stream of the activity surface 102 .
- the video capture device 110 may be a front-facing camera that is equipped with an adapter 108 that adapts the field of view of the camera 110 to include, at least in part, the activity surface 102 .
- the physical activity scene 116 of the activity surface 102 captured by the video capture device 110 is also interchangeably referred to herein as the activity surface or the activity scene in some implementations.
- the computing device 104 and/or the video capture device 110 may be positioned and/or supported by a stand 106 .
- the stand 106 may position the display 112 of the computing device 104 in a position that is optimal for viewing and interaction by the user who is simultaneously interacting with the physical environment (physical activity scene 116).
- the stand 106 may be configured to rest on the activity surface 102 and receive and sturdily hold the computing device 104 so the computing device 104 remains still during use.
- the adapter 108 adapts a video capture device 110 (e.g., front-facing, rear-facing camera) of the computing device 104 to capture substantially only the physical activity scene 116 , although numerous further implementations are also possible and contemplated.
- the camera adapter 108 can split the field of view of the front-facing camera into two scenes.
- the video capture device 110 captures a physical activity scene 116 that includes a portion of the activity surface 102 and is able to capture physical interface object(s) 120 in either portion of the physical activity scene 116 .
- the camera adapter 108 can redirect a rear-facing camera of the computing device (not shown) toward a front-side of the computing device 104 to capture the physical activity scene 116 of the activity surface 102 located in front of the computing device 104 .
- the adapter 108 can define one or more sides of the scene being captured (e.g., top, left, right, with bottom open).
- the adapter 108 and stand 106 for a computing device 104 may include a slot for retaining (e.g., receiving, securing, gripping, etc.) an edge of the computing device 104 to cover at least a portion of the camera 110 .
- the adapter 108 may include at least one optical element (e.g., a mirror) to direct the field of view of the camera 110 toward the activity surface 102 .
- the computing device 104 may be placed in and received by a compatibly sized slot formed in a top side of the stand 106 .
- the slot may extend at least partially downward into a main body of the stand 106 at an angle so that when the computing device 104 is secured in the slot, it is angled back for convenient viewing and utilization by its user or users.
- the stand 106 may include a channel formed perpendicular to and intersecting with the slot.
- the channel may be configured to receive and secure the adapter 108 when not in use.
- the adapter 108 may have a tapered shape that is compatible with and configured to be easily placeable in the channel of the stand 106 .
- the channel may magnetically secure the adapter 108 in place to prevent the adapter 108 from being easily jarred out of the channel.
- the stand 106 may be elongated along a horizontal axis to prevent the computing device 104 from tipping over when resting on a substantially horizontal activity surface (e.g., a table).
- the stand 106 may include channeling for a cable that plugs into the computing device 104 .
- the cable may be configured to provide power to the computing device 104 and/or may serve as a communication link to other computing devices, such as a laptop or other personal computer.
- the adapter 108 may include one or more optical elements, such as mirrors and/or lenses, to adapt the standard field of view of the video capture device 110 .
- the adapter 108 may include one or more mirrors and lenses to redirect and/or modify the light being reflected from activity surface 102 into the video capture device 110 .
- the adapter 108 may include a mirror angled to redirect the light reflected from the activity surface 102 in front of the computing device 104 into a front-facing camera of the computing device 104 .
- many wireless handheld devices include a front-facing camera with a fixed line of sight with respect to the display 112 including a virtual scene 118 .
- the adapter 108 can be detachably connected to the device over the camera 110 to augment the line of sight of the camera 110 so it can capture the activity surface 102 (e.g., surface of a table).
- the mirrors and/or lenses in some implementations can be polished or laser quality glass.
- the mirrors and/or lenses may include a first surface that is a reflective element.
- the first surface can be a coating/thin film capable of redirecting light without having to pass through the glass of a mirror and/or lens.
- a first surface of the mirrors and/or lenses may be a coating/thin film and a second surface may be a reflective element.
- the light passes through the coating twice; however, since the coating is extremely thin relative to the glass, the distortive effect is reduced in comparison to a conventional mirror, which reduces distortion in a cost-effective way.
- the adapter 108 may include a series of optical elements (e.g., mirrors) that wrap light reflected off of the activity surface 102 located in front of the computing device 104 into a rear-facing camera of the computing device 104 so it can be captured.
- the adapter 108 could also adapt a portion of the field of view of the video capture device 110 (e.g., the front-facing camera) and leave a remaining portion of the field of view unaltered so that multiple scenes may be captured by the video capture device 110 as shown in FIG. 1 .
- the adapter 108 could also include optical element(s) that are configured to provide different effects, such as enabling the video capture device 110 to capture a greater portion of the activity surface 102 .
- the adapter 108 may include a convex mirror that provides a fisheye effect to capture a larger portion of the activity surface 102 than would otherwise be capturable by a standard configuration of the video capture device 110 .
- the video capture device 110 could, in some implementations, be an independent unit that is distinct from the computing device 104 and may be positionable to capture the activity surface 102 or may be adapted by the adapter 108 to capture the activity surface 102 as discussed above. In these implementations, the video capture device 110 may be communicatively coupled via a wired or wireless connection to the computing device 104 to provide it with the video stream being captured.
- the physical interface object(s) 120 in some implementations may be tangible objects that a user may interact with in the physical activity scene 116 .
- the physical interface object(s) 120 in some implementations may be programming blocks that depict various programming actions and functions. A user may arrange a sequence of the programming blocks representing different actions and functions on the physical activity scene 116 and the computing device 104 may process the sequence to determine a series of commands to execute in the virtual scene 118 .
- the virtual scene 118 in some implementations may be a graphical interface displayed on a display of the computing device 104 .
- the virtual scene 118 may be setup to display prompts and actions to a user to assist in organizing the physical interface object(s) 120 .
- the virtual scene may include a target virtual object 122 , depicted in FIG. 1 as an animated character.
- the user may create a series of commands using the physical interface object(s) 120 to control various actions of the target virtual object 122 , such as making the target virtual object 122 move around the virtual scene 118 , interact with an additional virtual object 124 , perform a repeated action, etc.
- FIG. 2 is a block diagram illustrating an example computer system 200 for virtualized tangible programming.
- the illustrated system 200 includes computing devices 104 a . . . 104 n (also referred to individually and collectively as 104 ) and servers 202 a . . . 202 n (also referred to individually and collectively as 202 ), which are communicatively coupled via a network 206 for interaction with one another.
- the computing devices 104 a . . . 104 n may be respectively coupled to the network 206 via signal lines 208 a . . . 208 n and may be accessed by users 222 a . . . 222 n (also referred to individually and collectively as 222 ).
- the servers 202 a . . . 202 n may be coupled to the network 206 via signal lines 204 a . . . 204 n , respectively.
- the use of the nomenclature “a” and “n” in the reference numbers indicates that any number of those elements having that nomenclature may be included in the system 200 .
- the network 206 may include any number of networks and/or network types.
- the network 206 may include, but is not limited to, one or more local area networks (LANs), wide area networks (WANs) (e.g., the Internet), virtual private networks (VPNs), mobile (cellular) networks, wireless wide area network (WWANs), WiMAX® networks, Bluetooth® communication networks, peer-to-peer networks, other interconnected data paths across which multiple devices may communicate, various combinations thereof, etc.
- the computing devices 104 a . . . 104 n are computing devices having data processing and communication capabilities.
- a computing device 104 may include a processor (e.g., virtual, physical, etc.), a memory, a power source, a network interface, and/or other software and/or hardware components, such as front and/or rear facing cameras, display, graphics processor, wireless transceivers, keyboard, camera, sensors, firmware, operating systems, drivers, various physical connection interfaces (e.g., USB, HDMI, etc.).
- the computing devices 104 a . . . 104 n may couple to and communicate with one another and the other entities of the system 200 via the network 206 using a wireless and/or wired connection. While two or more computing devices 104 are depicted in FIG. 2, the system 200 may include any number of computing devices 104. In addition, the computing devices 104 a . . . 104 n may be the same or different types of computing devices.
- one or more of the computing devices 104 a . . . 104 n may include a camera 110 , a detection engine 212 , and activity application(s) 214 .
- One or more of the computing devices 104 and/or cameras 110 may also be equipped with an adapter 108 as discussed elsewhere herein.
- the detection engine 212 is capable of detecting and/or recognizing physical interface object(s) 120 located in/on the physical activity scene 116 (e.g., on the activity surface 102 within field of view of camera 110 ).
- the detection engine 212 can detect the position and orientation of the physical interface object(s) 120 in physical space, detect how the physical interface object(s) 120 are being manipulated by the user 222 , and cooperate with the activity application(s) 214 to provide users 222 with a rich virtual experience by executing commands in the virtual scene 118 based on the physical interface object(s) 120 .
- the detection engine 212 processes video captured by a camera 110 to detect physical interface object(s) 120 .
- the activity application(s) 214 are capable of executing a series of commands in the virtual scene 118 based on the detected physical interface object(s) 120 . Additional structure and functionality of the computing devices 104 are described in further detail below with reference to at least FIG. 3 .
- the servers 202 may each include one or more computing devices having data processing, storing, and communication capabilities.
- the servers 202 may include one or more hardware servers, server arrays, storage devices and/or systems, etc., and/or may be centralized or distributed/cloud-based.
- the servers 202 may include one or more virtual servers, which operate in a host server environment and access the physical hardware of the host server including, for example, a processor, memory, storage, network interfaces, etc., via an abstraction layer (e.g., a virtual machine manager).
- the servers 202 may include software applications operable by one or more computer processors of the servers 202 to provide various computing functionalities, services, and/or resources, and to send data to and receive data from the computing devices 104 .
- the software applications may provide functionality for internet searching; social networking; web-based email; blogging; micro-blogging; photo management; video, music and multimedia hosting, distribution, and sharing; business services; news and media distribution; user account management; or any combination of the foregoing services.
- the servers 202 are not limited to providing the above-noted services and may include other network-accessible services.
- it should be understood that the system 200 illustrated in FIG. 2 is provided by way of example, and that a variety of different system environments and configurations are contemplated and are within the scope of the present disclosure. For instance, various functionality may be moved from a server to a client, or vice versa, and some implementations may include additional or fewer computing devices, services, and/or networks, and may implement various functionality client- or server-side. Further, various entities of the system 200 may be integrated into a single computing device or system or additional computing devices or systems, etc.
- FIG. 3 is a block diagram of an example computing device 104 .
- the computing device 104 may include a processor 312 , memory 314 , communication unit 316 , display 112 , camera 110 , and an input device 318 , which are communicatively coupled by a communications bus 308 .
- the computing device 104 is not limited to such and may include other elements, including, for example, those discussed with reference to the computing devices 104 in FIGS. 1 and 2 .
- the processor 312 may execute software instructions by performing various input/output, logical, and/or mathematical operations.
- the processor 312 has various computing architectures to process data signals including, for example, a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, and/or an architecture implementing a combination of instruction sets.
- the processor 312 may be physical and/or virtual, and may include a single core or plurality of processing units and/or cores.
- the memory 314 is a non-transitory computer-readable medium that is configured to store and provide access to data to the other elements of the computing device 104 .
- the memory 314 may store instructions and/or data that may be executed by the processor 312 .
- the memory 314 may store the detection engine 212 , the activity application(s) 214 , and the camera driver 306 .
- the memory 314 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, data, etc.
- the memory 314 may be coupled to the bus 308 for communication with the processor 312 and the other elements of the computing device 104 .
- the communication unit 316 may include one or more interface devices (I/F) for wired and/or wireless connectivity with the network 206 and/or other devices.
- the communication unit 316 may include transceivers for sending and receiving wireless signals.
- the communication unit 316 may include radio transceivers for communication with the network 206 and for communication with nearby devices using close-proximity (e.g., Bluetooth®, NFC, etc.) connectivity.
- the communication unit 316 may include ports for wired connectivity with other devices.
- the communication unit 316 may include a CAT-5 interface, Thunderbolt™ interface, FireWire™ interface, USB interface, etc.
- the display 112 may display electronic images and data output by the computing device 104 for presentation to a user 222 .
- the display 112 may include any conventional display device, monitor or screen, including, for example, an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), etc.
- the display 112 may be a touch-screen display capable of receiving input from one or more fingers of a user 222 .
- the display 112 may be a capacitive touch-screen display capable of detecting and interpreting multiple points of contact with the display surface.
- the computing device 104 may include a graphics adapter (not shown) for rendering and outputting the images and data for presentation on display 112 .
- the graphics adapter (not shown) may be a separate processing device including a separate processor and memory (not shown) or may be integrated with the processor 312 and memory 314 .
- the input device 318 may include any device for inputting information into the computing device 104 .
- the input device 318 may include one or more peripheral devices.
- the input device 318 may include a keyboard (e.g., a QWERTY keyboard), a pointing device (e.g., a mouse or touchpad), microphone, a camera, etc.
- the input device 318 may include a touch-screen display capable of receiving input from the one or more fingers of the user 222 .
- the functionality of the input device 318 and the display 112 may be integrated, and a user 222 of the computing device 104 may interact with the computing device 104 by contacting a surface of the display 112 using one or more fingers.
- the user 222 could interact with an emulated (i.e., virtual or soft) keyboard displayed on the touch-screen display 112 by using fingers to contact the display 112 in the keyboard regions.
- the detection engine 212 may include a detector 304 .
- the elements 212 and 304 may be communicatively coupled by the bus 308 and/or the processor 312 to one another and/or the other elements 214 , 306 , 310 , 314 , 316 , 318 , 112 , and/or 110 of the computing device 104 .
- one or more of the elements 212 and 304 are sets of instructions executable by the processor 312 to provide their functionality.
- one or more of the elements 212 and 304 are stored in the memory 314 of the computing device 104 and are accessible and executable by the processor 312 to provide their functionality. In any of the foregoing implementations, these components 212 , and 304 may be adapted for cooperation and communication with the processor 312 and other elements of the computing device 104 .
- the detector 304 includes software and/or logic for processing the video stream captured by the camera 110 to detect physical interface object(s) 120 included in the video stream. In some implementations, the detector 304 may identify line segments related to physical interface object(s) 120 included in the physical activity scene 116 . In some implementations, the detector 304 may be coupled to and receive the video stream from the camera 110 , the camera driver 306 , and/or the memory 314 .
- the detector 304 may process the images of the video stream to determine positional information for the line segments related to the physical interface object(s) 120 in the activity scene 116 (e.g., location and/or orientation of the line segments in 2D or 3D space) and then analyze characteristics of the line segments included in the video stream to determine the identities and/or additional attributes of the line segments.
- the detector 304 may recognize the line by identifying its contours.
- the detector 304 may also identify various attributes of the line, such as colors, contrasting colors, depth, texture, etc.
- the detector 304 may use the description of the line and the line's attributes to identify the physical interface object(s) 120 by comparing the description and attributes to a database of objects and identifying the closest matches.
- the detector 304 may be coupled to the storage 310 via the bus 308 to store, retrieve, and otherwise manipulate data stored therein. For example, the detector 304 may query the storage 310 for data matching any line segments that it has determined are present in the physical activity scene 116 . In all of the above descriptions, the detector 304 may send the detected images to the detection engine 212 and the detection engine 212 may perform the above described features.
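- As an illustration only (not the disclosed algorithm), the detection flow described above can be sketched with conventional contour detection and a simple attribute comparison; the object database entries, attribute names, and matching rule below are hypothetical.

```python
# Illustrative sketch of a detector flow: find contours in a frame, summarize
# simple visual attributes, and match against a small object database.
import cv2

OBJECT_DB = {
    # object_id: reference visual attributes (hypothetical values)
    "walk_tile": {"mean_hue": 100.0, "aspect_ratio": 2.0},
    "jump_tile": {"mean_hue": 30.0, "aspect_ratio": 2.0},
}

def describe_region(frame_bgr, contour):
    x, y, w, h = cv2.boundingRect(contour)
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    return {"mean_hue": float(hsv[..., 0].mean()),
            "aspect_ratio": w / max(h, 1),
            "bbox": (x, y, w, h)}

def match(description):
    # closest match = smallest summed attribute distance to a database entry
    def distance(obj_id):
        ref = OBJECT_DB[obj_id]
        return (abs(ref["mean_hue"] - description["mean_hue"])
                + abs(ref["aspect_ratio"] - description["aspect_ratio"]))
    return min(OBJECT_DB, key=distance)

def detect_objects(frame_bgr, min_area=500):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue
        description = describe_region(frame_bgr, contour)
        detections.append({"object_id": match(description), **description})
    return detections
```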
- the detector 304 may be able to process the video stream to detect sequences of physical interface object(s) 120 on the physical activity scene 116 .
- the detector 304 may be configured to understand relational aspects between the physical interface object(s) 120 and determine a sequence, interaction, change, etc. based on the relational aspects.
- the detector 304 may be configured to identify an interaction related to one or more physical interface object(s) 120 present in the physical activity scene 116 and the activity application(s) 214 may execute a series of commands based on the relational aspects between the one or more physical interface object(s) 120 and the interaction.
- the interaction may be pressing a button incorporated into a physical interface object(s) 120 .
- the activity application(s) 214 include software and/or logic for receiving a sequence of physical interface object(s) 120 and identifying corresponding commands that can be executed in the virtual scene 118 .
- the activity application(s) 214 may be coupled to the detector 304 via the processor 312 and/or the bus 308 to receive the detected physical interface object(s) 120 .
- a user 222 may arrange a sequence of physical interface object(s) 120 on the physical activity scene 116 .
- the detection engine 212 may then notify the activity application(s) 214 that a user has pressed an “execution block” in the sequence of the physical interface object(s) 120 , causing the activity application(s) 214 to execute a set of commands associated with each of the physical interface object(s) 120 and manipulate the target virtual object 122 (e.g., move, remove, adjust, modify, etc., the target virtual object 122 and/or other objects and/or parameters in the virtual scene).
- the activity application(s) 214 may determine the set of commands by searching through a database of commands that are compatible with the attributes of the detected physical interface object(s) 120 .
- the activity application(s) 214 may access a database of commands stored in the storage 310 of the computing device 104 .
- the activity application(s) 214 may access a server 202 to search for commands.
- a user 222 may predefine a set of commands to include in the database of commands. For example, a user 222 can predefine that an interaction with a specific physical interface object 120 included in the physical activity scene 116 prompts the activity application(s) 214 to execute a predefined set of commands based on the interaction.
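- A minimal sketch of the command lookup described above, assuming a local command table, an optional user-predefined table, and an optional remote fallback; the table contents and function names are illustrative, not part of the disclosure.

```python
# Hypothetical command database keyed by detected object identity.
LOCAL_COMMANDS = {
    "walk_tile": ["MOVE_FORWARD"],
    "jump_tile": ["JUMP"],
}
USER_DEFINED_COMMANDS = {}  # populated ahead of time by the user 222

def predefine_commands(object_id, commands):
    """Bind a specific physical interface object to a user-chosen command set."""
    USER_DEFINED_COMMANDS[object_id] = list(commands)

def lookup_commands(object_id, fetch_remote=None):
    if object_id in USER_DEFINED_COMMANDS:
        return USER_DEFINED_COMMANDS[object_id]
    if object_id in LOCAL_COMMANDS:
        return LOCAL_COMMANDS[object_id]
    if fetch_remote is not None:     # fall back to a server-side search
        return fetch_remote(object_id)
    return []

predefine_commands("magic_tile", ["SPIN", "CELEBRATE"])
print(lookup_commands("magic_tile"))  # ['SPIN', 'CELEBRATE']
```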
- the activity application(s) 214 may enhance the virtual scene 118 and/or the target virtual object 122 as part of the executed set of commands.
- the activity application(s) 214 may display visual enhancements as part of executing the set of commands.
- the visual enhancements may include adding color, extra virtualizations, background scenery, etc.
- the visual enhancements may include having the target virtual object 122 move or interact with another virtualization ( 124 ) in the virtual scene 118 .
- the manipulation of the physical interface object(s) 120 by the user 222 in the physical activity scene 116 may be incrementally presented in the virtual scene 118 as the user 222 manipulates the physical interface object(s) 120 , an example of which is shown in FIG. 9 .
- the activity applications 214 may include video games, learning applications, assistive applications, storyboard applications, collaborative applications, productivity applications, etc.
- the camera driver 306 includes software storable in the memory 314 and operable by the processor 312 to control/operate the camera 110 .
- the camera driver 306 is a software driver executable by the processor 312 for signaling the camera 110 to capture and provide a video stream and/or still image, etc.
- the camera driver 306 is capable of controlling various features of the camera 110 (e.g., flash, aperture, exposure, focal length, etc.).
- the camera driver 306 may be communicatively coupled to the camera 110 and the other components of the computing device 104 via the bus 308 , and these components may interface with the camera driver 306 via the bus 308 to capture video and/or still images using the camera 110 .
- the camera 110 is a video capture device configured to capture video of at least the activity surface 102 .
- the camera 110 may be coupled to the bus 308 for communication and interaction with the other elements of the computing device 104 .
- the camera 110 may include a lens for gathering and focusing light, a photo sensor including pixel regions for capturing the focused light, and a processor for generating image data based on signals provided by the pixel regions.
- the photo sensor may be any type of photo sensor including a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, a hybrid CCD/CMOS device, etc.
- the camera 110 may also include any conventional features such as a flash, a zoom lens, etc.
- the camera 110 may include a microphone (not shown) for capturing sound or may be coupled to a microphone included in another component of the computing device 104 and/or coupled directly to the bus 308 .
- the processor of the camera 110 may be coupled via the bus 308 to store video and/or still image data in the memory 314 and/or provide the video and/or still image data to other elements of the computing device 104 , such as the detection engine 212 and/or activity application(s) 214 .
- the storage 310 is an information source for storing and providing access to stored data, such as a database of commands, user profile information, community-developed commands, virtual enhancements, object data, calibration data, and/or any other information generated, stored, and/or retrieved by the activity application(s) 214 .
- the storage 310 may be included in the memory 314 or another storage device coupled to the bus 308 .
- the storage 310 may be, or may be included in, a distributed data store, such as a cloud-based computing and/or data storage system.
- the storage 310 may include a database management system (DBMS).
- the DBMS could be a structured query language (SQL) DBMS.
- storage 310 may store data in an object-based data store or multi-dimensional tables comprised of rows and columns, and may manipulate, i.e., insert, query, update, and/or delete, data entries stored therein using programmatic operations (e.g., SQL queries and statements or a similar database manipulation library). Additional characteristics, structure, acts, and functionality of the storage 310 are discussed elsewhere herein.
- FIG. 4A is a graphical representation 400 illustrating an example physical interface object 120 .
- the example physical interface object 120 may include two different regions, a command region 402 and a quantifier region 404 .
- the command region 402 and the quantifier region 404 may be different regions of the same (e.g., a single) physical interface object 120 , while in further implementations, the command region 402 and quantifier region 404 may be separable objects (e.g., tiles (also called blocks)) that can be coupled together to form a coupled command region 402 and quantifier region 404 .
- the quantifier regions 404 may represent various numbers and different quantifier regions may be coupled with different command regions 402 to form various programming commands.
- the command region 402 may represent various actions, such as walking, jumping, interacting, etc.
- the command region 402 may correspond to the set of commands that causes the target virtual object 122 to perform the action depicted on the command region 402 .
- the quantifier region 404 may act as a multiplier to the command region 402 and may correspond to a multiplying effect for the number of times the set of commands is executed by the activity application(s) 214 , causing the target virtual object 122 to perform the action the number of times represented by the quantifier region 404 .
- the command region 402 may represent the action to move and the quantifier region 404 may include the quantity “2”, causing the activity application(s) 214 to execute a set of commands causing the target virtual object 122 to move two tiles.
- a command region 402 that does not include a quantifier region 404 may cause the activity application(s) 214 to execute a set of commands a single time (or apply any other default alternative when a quantifier region 404 is not detected).
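- A short sketch of the default behavior described above, assuming a quantifier simply multiplies the command set and defaults to one when absent; the command names are placeholders.

```python
# A quantifier region scales how many times a command region's command set is
# executed, defaulting to one when no quantifier is detected.
def expand(command_region, quantifier_region=None, default_count=1):
    count = quantifier_region if quantifier_region is not None else default_count
    return [command_region] * count

assert expand("MOVE", 2) == ["MOVE", "MOVE"]
assert expand("MOVE") == ["MOVE"]   # no quantifier detected -> run once
```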
- the physical interface object(s) 120 may include a directional region 406 .
- the directional region 406 may correspond to a set of commands representing a direction for an action represented in the command region 402 .
- the directional region 406 may be represented as an arrow and the direction of the arrow may represent a corresponding direction for a set of commands.
- a directional command may be represented by the directional region 406 .
- the directional command may be able to point in any direction, including up, down, left, and/or right.
- the directional region 406 may be a dial that a user can rotate to point in different directions.
- the dial may be integrated into the physical interface object(s) 120 or the dial may be separable and may be configured to couple with the physical interface object(s) 120 to allow a user to rotate the dial.
- the directional region 406 may be rotatable, allowing a user to manipulate the directional region 406 to point in a variety of different directions.
- the detection engine 212 may be configured to identify the directional region 406 and use the directional region 406 to divide the physical interface object(s) 120 into the quantifier region 404 and the command region 402 .
- the physical interface object(s) 120 may be magnetic and may be configured to magnetically fasten to adjacent objects.
- a given programming tile may include tile magnetic fasteners 408 and/or region magnetic fasteners 410 .
- the tile magnetic fasteners 408 may be present on a top side and/or a bottom side of the physical interface object(s) 120 and allow a physical interface object(s) 120 to magnetically couple with other objects, such as additional physical interface object(s) 120 , boundaries of the physical activity scene 116 , etc.
- the tile magnetic fasteners 408 may magnetically couple with additional tile magnetic fasteners (not shown) on other physical interface object(s) 120 .
- the objects being magnetically coupled with the physical interface object(s) 120 may include a ferromagnetic material that magnetically couples with the tile magnetic fasteners 408 .
- the physical interface object(s) 120 may include two tile magnetic fasteners 408 a / 408 c on a top side and/or two tile magnetic fasteners 408 b / 408 d on a bottom side. In further implementations, other quantities of tile magnetic fasteners 408 are contemplated, such as a single tile magnetic fastener 408 .
- a given programming tile may include the region magnetic fasteners 410 on the left and/or right side of the programming tile that allow the programming tile to magnetically couple with an adjacent tile as depicted in FIG. 4A where the command region 402 may be magnetically coupled by the region magnetic fasteners 410 to the quantifier region 404 .
- a magnetic fastener may include a magnet, a ferrous material, etc.
- Detachably fastening the physical interface object(s) 120 is advantageous as it allows a user to conveniently arrange a collection of objects in a logical form, drag a collection of fastened objects around the physical activity scene 116 without the collection falling apart, and quickly manipulate physical interface object(s) 120 by allowing the collection of fastened objects to be quickly and neatly assembled, etc.
- physical interface object(s) 120 may include one or more alignment mechanisms to align the physical interface object(s) 120 with other physical interface object(s) 120 (e.g., vertically, horizontally, etc.).
- a first physical interface object 120 may include a protrusion 411 on a bottom side which may be configured to mate with a recess of a following physical interface object 120 on a top side (the recess is not shown for the following physical interface object 120 , but may be similar to a recess 409 of the first physical interface object 120 ), and so on and so forth, although it should be understood that other suitable alignment mechanisms are also possible and contemplated (e.g., flat surfaces that are magnetically alignable, other compatible edge profiles (e.g., wavy surfaces, jagged surfaces, puzzle-piece shaped edges, other compatibly shaped protrusion(s) and/or recesses), other suitable fasteners (e.g., snaps, hooks, hook and loop, etc.), etc.).
- additional and/or alternative alignment mechanisms to align the physical interface object(s) 120 with one another are also possible and contemplated.
- the detection engine 212 may classify regions using machine learning models and/or one or more visual attributes of the regions (e.g., color, graphics, number, etc.) into commands and quantifiers. This allows the detection engine 212 to determine the actions, directionality, and/or numbers for the detected physical interface object(s) 120 .
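- The classification step above could be approximated with simple rules over visual attributes, as in the sketch below; the attribute names and thresholds are hypothetical, and a trained machine learning model could stand in for classify_region with the same interface.

```python
# Illustrative rule-based stand-in for region classification: simple visual
# attributes decide whether a detected region is a command, a quantifier, or a
# direction.
def classify_region(region):
    if region.get("numeral") is not None:
        return ("quantifier", int(region["numeral"]))
    if region.get("has_arrow"):
        return ("direction", region.get("arrow_angle", 0))
    return ("command", region.get("icon", "unknown"))

regions = [
    {"icon": "walk"},
    {"numeral": "3"},
    {"has_arrow": True, "arrow_angle": 90},
]
print([classify_region(r) for r in regions])
# [('command', 'walk'), ('quantifier', 3), ('direction', 90)]
```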
- FIG. 4B is a graphical representation 412 illustrating example physical interface object(s) 120 represented as various programming tiles 414 - 430 .
- the programming tiles may include verb tiles that represent various commands and other command tiles, adverb tiles that modify the verb tiles, and/or quantifier tiles that represent units of measurement or quantities.
- Programming tile 414 may represent a repeat command.
- the activity application(s) 214 may associate the programming tile 414 with a repeat command that causes a sequence of commands to be repeated.
- the repeat command may be represented in some implementations by two arrows forming a circular design on the programming tile 414 .
- the programming tile 414 may be coupled with a quantifier region 404 causing the repeat command to be executed a number of times represented by the quantifier region 404 .
- Programming tile 416 may represent a verb tile depicting a walk command that causes the activity application(s) 214 to cause a target virtual object 122 to move.
- the walk command may be represented in some implementations by an image of a character moving on the programming tile 416 .
- the programming tile 416 may be coupled with a quantifier region 404 causing the walk command to be executed a number of times represented by the quantifier region 404 .
- Programming tile 418 may represent a verb tile depicting a jump command that causes the activity application(s) 214 to cause a target virtual object 122 to jump.
- the jump command may be represented in some implementations by an image of a character jumping on the programming tile 418 .
- the programming tile 418 may be coupled with a quantifier region 404 causing the jump command to be executed a number of times represented by the quantifier region 404 .
- Programming tile 420 may represent a verb tile depicting a tool command that causes the activity application(s) 214 to cause a target virtual object 122 to interact with something in the virtual scene 118 and/or perform an action.
- the tool command may be represented in some implementations by an image of a hand on the programming tile 420 .
- the programming tile 420 may be coupled with a quantifier region 404 causing the tool command to be executed a number of times represented by the quantifier region 404 .
- Programming tile 422 may represent a verb tile depicting a magic command that causes the activity application(s) 214 to cause a target virtual object 122 to perform a predefined command associated with the magic command.
- the magic command may be one example of an event command, while additional events may be included other than the magic command, such as a celebration event, a planting event, an attack event, a flashlight event, a tornado event, etc.
- the magic command may be represented in some implementations by an image of stars on the programming tile 422 .
- the programming tile 422 may be coupled with a quantifier region 404 causing the magic command to be executed a number of times represented by the quantifier region 404 .
- Programming tile 424 may represent a verb tile depicting a direction command that causes the activity application(s) 214 to perform a command in a specific direction in the virtual scene 118 .
- the direction command may be represented in some implementations by an image of an arrow on the programming tile 424 .
- the programming tile 424 may be coupled with a command region 402 causing the command to be executed in a specific direction.
- Programming tile 426 may represent a tile depicting an if command that causes the detection engine 212 to detect a specific situation and, when the situation is present, perform a separate set of commands as indicated by the if command.
- the if command may be represented in some implementations by an exclamation point on the programming tile 426 .
- the programming tile 426 may allow if/then instances to be programmed into a sequence of physical interface object(s) 120 .
- the detection engine 212 may be configured to detect clusters of tiles separated by an if command, as described in more detail with reference to FIG. 5 .
- Programming tiles 430 may represent examples of quantifier regions 404 depicting various numerical values.
- the quantifier regions 404 may be coupled with other programming tiles to alter the amount of times a command may be executed.
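- The tiles 414 - 430 can be read as a small command vocabulary. The sketch below, which uses illustrative identifiers rather than the disclosed representation, shows how verb tiles, quantifier tiles, and a repeat tile could compose into a command sequence.

```python
# Hypothetical mapping from tile types to symbolic commands, with a quantifier
# multiplying a verb and a repeat tile wrapping a sub-sequence.
TILE_COMMANDS = {
    "repeat": "REPEAT",            # tile 414
    "walk": "WALK",                # tile 416
    "jump": "JUMP",                # tile 418
    "tool": "USE_TOOL",            # tile 420
    "magic": "MAGIC",              # tile 422
    "direction": "SET_DIRECTION",  # tile 424
    "if": "IF",                    # tile 426
    "execute": "EXECUTE",          # tile 428
}

def verb_with_quantifier(tile_type, quantity=1):
    return [TILE_COMMANDS[tile_type]] * quantity

def repeat(sub_commands, quantity=1):
    return sub_commands * quantity

program = verb_with_quantifier("walk", 2) + repeat(verb_with_quantifier("jump"), 3)
print(program)  # ['WALK', 'WALK', 'JUMP', 'JUMP', 'JUMP']
```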
- Programming tile 428 may represent an execution block that causes the activity application(s) 214 to execute the current sequence of physical interface object(s) 120 .
- the execution block may have one or more states.
- the detection engine 212 may be configured to determine the state of the execution block, and cause the activity application(s) 214 to execute the set of commands in response to detecting a change in the state. For example, one state may be a pressed-state and another state may be an unpressed-state. In the unpressed-state, the detection engine 212 may detect a visual indicator 432 that may optionally be included on the execution block. When a user interacts with the execution block, the visual indicator 432 may change causing the detection engine 212 to detect the pressed-state. For example, when a user pushes a button on the execution block, it may cause the visual indicator 432 (shown as slots) to change colors, disappear, etc. prompting the activity application(s) 214 to execute the set of commands.
- the execution block can additionally or alternatively have a semi-pressed state, in which a user may be interacting with the execution block, but has not yet fully transitioned between a pressed-state and an unpressed-state.
- the execution block may further include a rubbish state, in which the detection engine 212 may be unable to determine a state of the execution block and various parameters may be programmed for this state, such as waiting until a specific state change has been detected, inferring based on the arrangement of other physical interface object(s) 120 a reasonable state, etc.
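- One way to picture the state detection described above is to infer the execution block state from how much of the visual indicator 432 remains visible; the thresholds and state names below are illustrative assumptions.

```python
# Infer the execution block state from the fraction of indicator slots still
# visible; thresholds and the "rubbish" fallback are hypothetical.
def execution_block_state(visible_slots, total_slots):
    if total_slots == 0:
        return "rubbish"            # indicator could not be found at all
    visible = visible_slots / total_slots
    if visible > 0.9:
        return "unpressed"
    if visible < 0.1:
        return "pressed"
    return "semi-pressed"

previous = "unpressed"
current = execution_block_state(visible_slots=0, total_slots=2)   # 'pressed'
should_execute = (previous == "unpressed" and current == "pressed")
```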
- FIG. 4C is a side view of a set 434 of physical interface object(s) 120 a - 120 c and FIG. 4D is a side view of a stack 436 of the set of physical interface object(s) 120 a - 120 c .
- a user may stack and/or unstack the physical interface object(s) 120 a - 120 c for convenient storage and manipulation by nesting the physical interface object(s) together via compatible coupling portions.
- each physical interface object(s) 120 may include compatible receiving portions 440 and engaging portions 438 .
- the engaging portion 438 of a physical interface object 120 may be configured to engage with the receiving portion of an adjacently situated physical interface object 120 as shown in FIG. 4D allowing the physical interface object(s) 120 to stack in a flush manner, with no protrusions or gaps between the stacked physical interface object(s) 120 .
- a parameter adjustment mechanism such as one including a direction region 406 , may form the engaging portion 438 and may be configured to engage with a correspondingly sized receiving portion 440 , such as a recess, as shown in the illustrated embodiment, although it should be understood that the engaging portion 438 and receiving portions 440 may be discrete members of the physical interface object(s) 120 . More particularly, the engaging portion 438 may include the parameter adjustment mechanism forming a protrusion protruding outwardly from a front surface of the physical interface object(s) 120 .
- Each physical interface object(s) 120 a - 120 c depicted in the representation 434 includes a corresponding receiving portion 440 that may include a recess formed in a bottom surface of the physical interface object(s) 120 .
- the recess may be configured to receive the protrusion, allowing the protrusion to nest within the recess of the physical interface object(s) 120 when the physical interface object(s) 120 are stacked as shown in FIG. 4D .
- the physical interface object(s) 120 a - 120 c may magnetically couple when stacked.
- the magnetic coupling may occur based on top magnetic fasteners 442 and bottom magnetic fasteners 444 of adjacent physical interface object(s) 120 coupling together when the physical interface object(s) 120 a - 120 c are in a stacked position as shown in FIG. 4D .
- FIG. 5 is a graphical representation 500 representing an example sequence of physical interface object(s) 120 .
- the detection engine 212 may detect a sequence 502 of physical interface object(s) 120 .
- the detection engine 212 may detect a sequence as including at least one command tile and an execution block.
- the sequence 502 includes multiple command tiles, quantifier tiles, and an execution block coupled together and representing a specific set of commands.
- other sequences of physical interface object(s) 120 are contemplated.
- the detection engine 212 may be configured to identify separate clusters that are portions of the sequence, for example, cluster 504 may represent a separate set of commands to perform in response to an if tile indicating an interrupt and may cause the activity application(s) 214 to halt execution of a previous section of the sequence in order to execute the new cluster 504 when conditions for the interrupt are satisfied.
- the detection engine 212 may be configured to determine various clusters and subroutines related to those clusters.
- the detection engine 212 may determine statistically likely locations for certain physical interface object(s) 120 based on the clustering. For example, two or more clusters may be represented by two branches of a sequence in the physical activity scene, and, based on the clusters, the detection engine 212 may determine two possible positions for an end object (e.g., a play button).
- the activity application(s) 214 may be configured to inject a candidate into the set of commands based on the possible positions of the object.
- the detection engine 212 may identify likely candidates for a missing physical interface object(s) 120 and the activity application(s) 214 may inject the likely candidate into the set of commands at the candidate location (e.g., the portion of the set of commands determined to be missing). In further implementations, if the detection engine 212 detects that the sequence of physical interface object(s) 120 exceeds a boundary of the physical activity scene 116 , then the detection engine 212 may use statistical probabilities of likely locations for an execution block and execute the commands associated with the detected physical interface object(s) 120 .
- the detection engine 212 may determine if there are missing object candidates, determine approximate candidates, and populate the positions of the missing object candidates with the approximations. For example, in some cases, an end object (e.g., play button) at the end of a string of objects may go undetected, and the detection engine 212 may automatically determine the absence of that object from likely positions, and add it as a candidate to those positions.
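- A hypothetical heuristic for the missing-object handling described above: when no execution block is detected, propose a candidate position one slot past the end of the longest detected cluster and inject it into the sequence. The data layout and scoring rule are assumptions for illustration.

```python
# Inject a likely "execute" (play button) candidate when none was detected.
def inject_missing_end_object(clusters):
    """clusters: list of tile lists; each tile is {'pos': (col, row), 'type': str}."""
    detected_types = {tile["type"] for cluster in clusters for tile in cluster}
    if "execute" in detected_types or not clusters:
        return None
    # statistically likely location: one slot past the last tile of the longest cluster
    best_cluster = max(clusters, key=len)
    col, row = best_cluster[-1]["pos"]
    candidate = {"pos": (col + 1, row), "type": "execute", "inferred": True}
    best_cluster.append(candidate)
    return candidate
```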
- FIG. 6 is a flowchart of an example method 600 for virtualized tangible programming.
- the detection engine 212 may detect an object included in image data received from the video capture device 110 .
- the detection engine 212 detects the objects by analyzing specific frames of a video file from the image data and performing object and line recognition to categorize the detected physical interface object(s) 120 .
- the detection engine 212 may perform a comparison between each of the detected physical interface object(s) 120 and a predefined set of object definitions. For example, the detection engine 212 may compare identified graphical attributes to identify various portions of programming tiles, such as those described in FIG. 4B .
- the detection engine 212 may identify a color of a physical interface object(s) 120 and/or other detectable physical attribute(s) of the physical interface object(s) 120 (e.g., texture, profile, etc.), and identify the object based on the physical attribute(s) (e.g., color).
- the detection engine 212 may recognize one or more of the physical interface object(s) 120 as a visually quantified object and/or a visually unquantified object based on the comparisons.
- a visually quantified object may include a physical interface object(s) 120 that quantifies a parameter, such as a direction, a numerical value, etc.
- Visually quantified objects may include command regions 402 coupled with quantifier regions 404 .
- visually quantified objects may also include command regions 402 that are generally coupled with quantifier regions 404 , but are set to a default numerical value (such as “1”) when no quantifier region 404 is coupled to the command region 402 .
- Visually unquantified objects may, in some cases, not explicitly quantify parameters, or may quantify parameters in a manner that is different from the visually quantified objects.
- Visually unquantified objects may include physical interface object(s) 120 that the detection engine 212 does not expect to be coupled with a quantifier region 404 , such as an execution block 428 , magic tile 422 , and/or if tile 426 as examples.
- the detection engine 212 may process the command region 402 and/or the quantifier region 404 for each visually quantified object and identify corresponding commands.
- the corresponding commands may include commands related to specific command regions 402 and multipliers of the command related to quantities detected in the quantifier region 404 .
- the detection engine 212 may use a specific set of rules to classify the command regions 402 and/or the quantifier regions 404 as described elsewhere herein.
- the detection engine 212 may further identify corresponding commands for each visually unquantified object, such as if/then commands for repeat tiles, magic commands for magic tiles, and/or detecting states for the execution block.
- the detection engine may be configured to provide the detected commands to the activity application(s) 214 and the activity application(s) 214 may compile the commands into a set of commands that may be executed on the computing device 104 .
- the set of commands may include the specific sequence of the commands and the activity application(s) 214 may execute the sequence of commands in a linear fashion based on the order that the physical interface object(s) 120 were arranged in the physical activity scene 116 .
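- A sketch of the linear execution described above against a simple grid model of the virtual scene; the command names and grid representation are assumptions for illustration, not the disclosed implementation.

```python
# Execute a compiled command set in order, updating the target virtual object's
# position on a grid.
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def run(commands, start=(0, 0), facing="right"):
    x, y = start
    for cmd in commands:
        if cmd.startswith("SET_DIRECTION:"):
            facing = cmd.split(":", 1)[1]
        elif cmd == "WALK":
            dx, dy = DIRECTIONS[facing]
            x, y = x + dx, y + dy
        # other verbs (JUMP, USE_TOOL, MAGIC, ...) would update state similarly
    return (x, y), facing

print(run(["WALK", "WALK", "SET_DIRECTION:down", "WALK"]))  # ((2, 1), 'down')
```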
- the activity application(s) 214 may be configured to detect any errors when compiling the set of commands and provide alerts to the user when the set of commands would not produce a desired result.
- the activity application(s) 214 may cause the virtual scene to present an indication that the set of commands are improper.
- the activity application(s) 214 may provide prompts and suggestions in response to the set of commands being improper. The prompts and/or suggestions may be based on other users' history on a specific level, machine learning of appropriate responses, etc.
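- A small illustrative validation pass of the kind described above; the specific rules and messages are hypothetical.

```python
# Flag an improper command set and return human-readable suggestions that the
# virtual scene could display.
def validate(commands, level_width):
    problems = []
    if not commands:
        problems.append("No command tiles detected; add at least one verb tile.")
    walk_count = sum(1 for c in commands if c == "WALK")
    if walk_count > level_width:
        problems.append("The character would walk past the edge of the level; "
                        "reduce the quantifier on the walk tile.")
    return problems

for message in validate(["WALK"] * 6, level_width=4):
    print(message)
```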
- FIG. 7 is a flowchart of an example method for virtualized tangible programming.
- the activity application(s) 214 may cause the display 112 to present a virtual environment.
- the activity application(s) 214 may cause the display to present a target virtual object within the virtual environment; the virtual environment may be an environment displayed in at least a portion of the virtual scene.
- the virtual environment may include a forest setting displayed on a graphical user interface, and the target virtual object 122 may be a virtual character in the forest setting.
- the activity application(s) 214 may determine an initial state of the target virtual object 122 in the virtual environment of the user interface.
- the initial state may be related to a specific location within the virtual environment, an initial objective, a level, etc.
- the target virtual object 122 may be present in the center of the display 112 and the goal of the target virtual object 122 may be to interact with an additional virtual object 124 also displayed in the virtual environment.
- the video capture device 110 may capture an image of the physical activity surface 116 .
- the physical activity surface may include an arrangement of physical interface object(s) 120 .
- the video capture device 110 may capture multiple images of the physical activity surface 116 over a period of time to capture changes in the arrangement of the physical interface object(s) 120 .
- the detection engine 212 may receive the image from the video capture device 110 and process the image to detect the physical interface object(s) 120 in specific orientations. For example, the detection engine 212 may identify physical interface object(s) 120 that a user has arranged into a sequence. In further implementations, the detection engine 212 may be configured to ignore objects present in the physical activity scene 116 that are not oriented into a specific orientation. For example, if a user creates a sequence of physical interface object(s) 120 and pushes additional physical interface object(s) 120 to the side that were not used to create the sequence, then the detection engine 212 may ignore the additional physical interface object(s) 120 even though they are detectable and recognized within the physical activity scene 116 .
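- The orientation filtering described above could be sketched as keeping only tiles that fall on a common row and ordering them left to right; the row tolerance and data layout below are assumptions for illustration.

```python
# Keep only tiles aligned into the sequence row; ignore tiles pushed aside even
# though they were detected.
def sequence_tiles(detections, row_tolerance_px=20):
    """detections: list of {'pos': (x, y), ...} dicts in image coordinates."""
    if not detections:
        return []
    ys = sorted(d["pos"][1] for d in detections)
    row_y = ys[len(ys) // 2]                      # median row as the sequence anchor
    in_row = [d for d in detections if abs(d["pos"][1] - row_y) <= row_tolerance_px]
    return sorted(in_row, key=lambda d: d["pos"][0])   # left-to-right order
```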
- the detection engine 212 may compare the physical interface object(s) 120 in the specific orientation to a predefined set of instructions.
- the predefined set of instructions may include commands related to the virtual scene represented by each of the physical interface object(s) present within the sequence.
- the predefined set of instructions may only relate to specific physical interface object(s) 120 present within the sequence, while other physical interface object(s) 120 do not include instruction sets.
- the instruction sets may include determining which physical interface object(s) 120 are visually quantified objects and which are visually unquantified objects.
- the predefined set of instructions may be built. Building the instruction set includes generating one or more clusters of physical interface object(s) 120 based on relative positions and/or relative orientations of the objects and determining a sequence for the commands of the instructions based on the clusters.
- the activity application(s) 214 may determine a command represented by the physical interface object(s) 120 in a specific orientation based on the comparison.
- determining a command may include identifying command regions and quantifier regions of specific physical interface object(s) 120 , while in further implementations, alternative ways of determining commands may be used based on how the set of commands are defined.
- the activity application(s) 214 may determine a path through the virtual environment for the target virtual object 122 based on the command.
- the determined path may be based on a set of rules and may include a prediction of what will happen when the command is executed in the virtual environment.
- the determined path may be the effect of a sequence of physical interface object(s) 120 prior to formal execution. For example, if the commands cause the target virtual object to move two blocks right and down one block to access a strawberry (additional virtual object 124 ) then the activity application(s) 214 may determine a path based on the commands causing the target virtual object 122 to perform these actions.
- the activity application(s) 214 may cause the display 112 to present a path projection within the virtual scene 118 in the user interface for presentation to the user.
- the path projection may be a visual indication of the effects of the command, such as highlighting a block the command would cause the target virtual object 122 to move.
- the activity application(s) 214 may cause an additional virtual object 124 to change colors to signal to the user that the command would cause the target virtual object 122 to interact with the additional virtual object 124 .
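- The path projection described above can be sketched by replaying the compiled commands against a grid model before execution and collecting every cell the target virtual object 122 would visit; the model below is illustrative only.

```python
# Compute the cells a command sequence would visit so the interface can
# highlight them before execution.
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def project_path(commands, start=(0, 0), facing="right"):
    path = [start]
    x, y = start
    for cmd in commands:
        if cmd.startswith("SET_DIRECTION:"):
            facing = cmd.split(":", 1)[1]
        elif cmd == "WALK":
            dx, dy = DIRECTIONS[facing]
            x, y = x + dx, y + dy
            path.append((x, y))
    return path

print(project_path(["WALK", "WALK", "SET_DIRECTION:down", "WALK"]))
# [(0, 0), (1, 0), (2, 0), (2, 1)]
```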
- FIGS. 8A-8D are a graphical representation 800 illustrating an example interface for virtualized tangible programming that includes progressive path highlighting.
- the virtual scene 118 includes a target virtual object 122 and includes a path projection 802 a showing a command that would cause the target virtual object 122 to perform an action on the current tile in the path projection 802 a .
- the path projection 802 b has been extended a block representing a command that would cause the target virtual object 122 to move to the tile shown in the path projection 802 b .
- the path projection 802 c has been extended an additional block representing a command to move two tiles.
- the path projection 802 may update as additional physical interface object(s) 120 are added to a sequence and the commands are displayed in the path projection 802 .
- the path projection 802 c may have been displayed based on either an additional command tile being added to the sequence or a command tile receiving a quantifier region to multiply the amount of times the command to move is performed.
- the path projection 802 d has been extended a tile to the right showing the addition of another command in a different direction.
- the path projection 802 shown in this example is merely illustrative, and various path projections based on a sequence of physical interface object(s) 120 are contemplated.
- FIG. 8E is a graphical representation 804 of an example interface for virtualized tangible programming.
- a command detection window 806 is displayed showing the physical interface object(s) 120 detected in a sequence by the detection engine 212 .
- the activity application(s) 214 may display the identified sequence as a way of providing feedback to a user as commands are identified. By displaying the detected sequence in a command detection window 806 , the activity application(s) 214 may signal to a user when a sequence is detected, if there are detection errors, or additional commands for the user to review.
- FIG. 9A is a perspective view of an example programming tile 900 .
- the programming tile 900 includes a first portion 940 and a second portion 941 .
- the first portion 940 includes a command region 902 .
- the command region 902 may include a visual indicator representing a repeat command (e.g., recursive arrows), which may be processed by the system 100 , as discussed elsewhere herein.
- the second portion 941 includes a quantifier region 904 .
- the quantifier region 904 includes a visual indicator representing a numeral (e.g., the number 1 ), which may be processed by the system 100 , as discussed elsewhere herein.
- the first portion 940 may comprise a body having a plurality of surfaces.
- the first portion 940 may include a front surface 942 , a back surface 960 , a first side surface 944 , a second side surface 945 , a third side surface 946 , and a tile coupling portion 952 having one or more sides.
- One or more of the surfaces of the first portion 940 may include components of one or more tile alignment mechanisms.
- the tile alignment mechanism conveniently allows for the alignment of two adjacently situated tiles. In some cases, as two tiles are situated sufficiently close to one another such that the corresponding alignment components comprising the alignment mechanism can engage, the alignment mechanism aligns the two tiles so they engage properly.
- the coupling of the two tiles may be assisted by compatible magnetic components included in the tiles that are configured to magnetically couple as the tiles are adjacently situated such that the alignment components may engage.
- the alignment mechanism can advantageously automatically align the tiles as the tiles become magnetically coupled.
- the front surface 942 may extend from the first side surface 944 to an edge of the tile coupling portion 952 , as well as from the third side surface 946 to the second side surface 945 .
- the front surface 942 may bear and/or incorporate the command region 902 .
- the front surface 942 may be connected to the back surface 960 by the first side surface 944 , the second side surface 945 , the third side surface 946 , and/or the one or more sides of the tile coupling portion 952 .
- the first side surface 944 , the second side surface 945 , and the third side surface 946 are depicted as being perpendicular to the front surface 942 and the back surface 960 , although it should be understood that the surfaces 942 , 960 , 944 , 945 , and/or 946 may have other forms and/or profiles (e.g., may be rounded, polygonal, have complex shapes, may be partial surfaces and/or include voids, etc.). In some embodiments, the surfaces 944 , 945 , 946 , etc., of the first portion 940 may be contiguous, and collectively form the outer sides of the body.
- the second portion 941 may comprise a body having a plurality of surfaces.
- the second portion 941 may include a front surface 943 , a back surface 961 , a first side surface 948 , a second side surface 947 , a third side surface 949 , and the tile coupling portion 954 having one or more sides.
- FIGS. 9B and 9C are perspective views of the programming tile 900 showing the first portion 940 and the second portion 941 of the programming tile 900 separated. In its separated form, the surfaces of the tile coupling portions 952 and 954 are revealed and visible. As shown, the tile coupling portion 952 may include side surfaces 956 and 957 , and the tile coupling portion 954 includes side surfaces 955 and 953 . In some embodiments, the tile coupling portion 952 and the tile coupling portion 954 may be shaped in a way that they can compatibly engage and become aligned. Any suitable mechanism for aligning the portions 940 and 941 of the programming tile 900 is contemplated.
- the tile coupling portion 952 may comprise a protruding portion and the tile coupling portion 954 may comprise a recessed portion.
- the protruding portion may include the surface 957 , which radially extends outwardly from a center point aligned with the side surfaces 956 .
- the recessed portion may include surface 953 that correspondingly radially recesses into the body of the second portion 941 , thus extending inwardly from a center point aligned with the side surfaces 955 .
- the side surfaces 956 and 955 , and the curved surfaces 957 and 953 may be configured to mate with one another. For instance, the surfaces may abut against one another when coupled, as shown in FIGS. 9A, 9D, and 9E .
- the second portion 941 may include one or more magnetic fasteners that are magnetically coupleable to one or more magnetic fasteners included in the first portion 940 .
- this advantageously allows the second portion 941 to be retained with the first portion 940 and resist inadvertent separation between the portions 940 and 941 .
- the compatible magnetic fasteners may be embedded in the side surfaces 955 , 956 , 953 , and/or 957 , such that as the surfaces are sufficiently closely adjacently situated, magnetic fields of the magnetic fasteners may interact and the pieces may bond together (e.g., snap together in some cases).
- the second portion 941 and the first portion 940 may be detachably coupled using additional and/or alternative fasteners, such as engagement and receiving components having predetermined shapes that are configured to snap together, clip together, hook together, or otherwise couple to one another in a removable fashion.
- the detachably coupled first and second portions 940 and 941 are advantageous as they allow the user to conveniently and easily switch out different tiles in order to change up the programming sequence they are creating.
- the user can easily switch out the second portion 941 to change the counter of a loop command, as shown in FIGS. 9A, 9B, 9C, and 9D , which show quantifier regions 904 having different values (e.g., one, two, three, and four, etc.).
- one or more sides of the programming tile 900 may include one or more components of the stacking mechanism, as described elsewhere herein.
- a bottom side of the programming tile 900 may include a bottom surface collectively comprised of bottom surface 960 and bottom surface 961 of the first and second portions 940 and 941 .
- the bottom surface may include a component 970 of the stacking mechanism that is configured to engage with one or more other compatible components, such that two or more tangible physical objects 120 can be stacked.
- a recess 970 may be formed in the bottom surface.
- the recess may include an inner cavity sidewall 971 and a cavity end/bottom surface 972 .
- the recess may be shaped to receive a compatibly shaped protrusion of another tangible physical object 120 , as discussed elsewhere herein. While in this particular example, the stacking mechanism component is shown as a recess, it should be understood that other suitable options, such as those described with reference to the alignment mechanism, are applicable and encompassed hereby.
- FIGS. 10A and 10B are perspective views of an example programming tile 1000 .
- the programming tile 1000 may include a body having a front surface 1001 , a back surface 1010 , a tile engaging side surface 1009 , an alignment side surface 1006 , an end surface 1004 , and a side surface 1002 .
- the front surface 1001 and back surface 1010 are connected via the side surfaces 1009 , 1006 , 1004 , and 1002 in a similar manner to that described with reference to the programming tile 900 , which will not be repeated here for the purpose of brevity.
- the programming tile 1000 may include a command region 902 .
- the command region 902 includes a visual indicator reflecting an if command, as discussed elsewhere herein, although other visual indicators are also possible and contemplated.
- the programming tile 1000 includes a tile coupling portion 1008 .
- the tile coupling portion 1008 is configured to couple with one or more sides of another tangible physical object 120 .
- coupling the programming tile 1000 to another tile allows the user to augment, enhance, add to, etc., an action of the other tile (e.g., based on the command regions 902 of the respective tiles), as discussed elsewhere herein.
- the tile coupling portion 1008 may comprise a recessed surface 1009 that is configured to mate with a corresponding outer surface of an adjacent programming tile, such as surface 948 of the second portion 941 of the programming tile 900 , the surface 1148 of the programming tile 1100 (e.g., see FIG. 11 ), and/or any other suitable tile surfaces of any other applicable tiles, etc.
- FIGS. 11A and 11B are perspective views of an example programming tile 1100 .
- the programming tile 1100 is comprised of a single tile, which includes a front surface 1142 having a command region 902 , side surfaces 944 , 1146 , 1148 , and 1145 , and bottom surface 1160 .
- the front surface 1142 is connected to the bottom surface 1160 via the side surfaces 944 , 1146 , 1148 , and 1145 , in a manner similar to that discussed with reference to programming tile 900 , which will not be repeated here for the purpose of brevity.
- the surface 1146 includes an alignment component 984
- the surface 1145 includes the alignment component 982 , as discussed elsewhere herein.
- a bottom side of the programming tile 1100 includes a component 970 of the stacking mechanism discussed above.
- the stacking component 970 may extend across two or more discrete portions of the programming tile; in this non-limiting example, the stacking component 970 is included in a single portion of the programming tile to illustrate the flexibility of the design. However, it should be understood that the stacking portion may be included in one or more regions of the programming tile.
- the component 970 may comprise two or more receiving or engaging components (e.g., recesses or protrusions, other fastening components, etc.) configured to interact with two or more corresponding receiving or engaging components of the opposing surface of an adjacently situated programming, such as one on which the programming tile 1100 is being stacked.
- FIGS. 12A and 12B are perspective views of an example programming tile 1200 .
- the programming tile 1200 may include a front surface 1241 including a command region 902 , side surfaces 944 , 1245 , 1248 , and 1246 , and bottom surface 1260 .
- the front surface 1241 may include one or more visual indicators 432 (e.g., 432 a , 432 b , etc.), that are mechanically linked to a user-interactable tangible button 1224 , such that when a user presses the button 1224 , the one or more visual indicators are triggered.
- the visual indicators 432 may change their physical appearance, the change of which may be detected by the system 100 , as discussed elsewhere herein.
- the button 1224 may be formed on a plate (not shown) within the body of the programming tile 1200 , which may comprise a housing of a mechanical assembly that transmits the vertical movement of the button to the components comprising the visual indicators 432 .
- a visual indicator 432 may comprise an aperture 1222 (e.g., 1222 a , 1222 b , etc.) formed in the front surface 1241 of the programming tile 1200 , and a block 1220 (e.g., 1220 a , 1220 b , etc.) that is situated within the aperture 1222 , thus filling the aperture 1222 .
- As the button 1224 is pressed (e.g., by a user pressing the top surface 1228 of the button 1224 , which is coupled to the mechanical assembly via side(s) 1230 of the button) and recedes into the corresponding aperture 1226 formed in the front surface 1241 and through which the button extends, the mechanical assembly transmits the movement to the block 1220 and correspondingly recedes the block away from the front surface such that the aperture appears empty.
- the state of the aperture (e.g., filled, empty) may be detected by the system 100 . Additionally or alternatively, the state of the button 1224 (e.g., pressed, semi-pressed, fully pressed), may similarly be detected by the system 100 . Detection of such state changes may trigger execution of the program which is embodied by a collection of programming tiles including, in this case, the programming tile 1200 .
- FIGS. 13A-13H are various views of an example programming tile 1300 .
- the programming tile 1300 includes a first portion 1340 and the second portion 941 .
- the second portion 941 is discussed with reference to FIGS. 9A-9E , so a detailed description of the portion 941 will not be repeated here for the purposes of brevity.
- the first portion 1340 includes a front surface 1342 , the side surface 944 , the side surface 945 , a side surface 1346 , an incrementing portion 1310 having one or more side surfaces, and a bottom surface 1360 .
- the front surface 1342 is connected to the back surface 1360 via the side surfaces 944 , 945 , 1346 , and/or surface(s) of the incrementing portion 1310 .
- the side surface 1346 includes the alignment component 984
- the side surface 945 includes the alignment component 982 , as discussed elsewhere herein.
- the front surface 1342 includes a command region 902 .
- the back surface 1360 of the programming tile 1300 may include a stacking component, such as component 970 .
- FIGS. 13C and 13D are profile views of the programming tile 1300 , showing the incrementing portion 1310 .
- the incrementing portion 1310 protrudes outwardly from the front surface 1342 .
- the incrementing portion 1310 includes one or more sides extending perpendicularly outwardly from the front surface 1342 to a top surface 1314 of the incrementing portion 1310 .
- the incrementing portion 1310 comprises a dial that is turnable by the user to adjust a parameter (e.g., the parameter region 906 (e.g., directional region 406 )) associated with the command of the command region 902 , as discussed elsewhere herein.
- the dial may be turned by the user to position the visual indicator (e.g., an arrow in this case) included on the top surface 1314 differently.
- the incrementing portion 1310 may include a base portion 1316 which includes a recess in which the turnable portion 1311 of the incrementing portion 1310 is inserted and in which the turnable portion 1311 rotates.
- the base portion 1316 may include a bowl like cavity into which the turnable portion 1311 is inserted.
- the cavity, in some implementations, may emulate a race, and the turnable portion 1311 may rotate on ball bearings included along the perimeter of the race, as one skilled in the art would understand.
- the turnable portion 1311 may include snapping fasteners configured to snap to corresponding snapping fasteners included in the base portion 1316 to retain the turnable portion 1311 in place.
- An outwardly facing portion of the turnable portion 1311 and the base portion 1316 may comprise the tile coupling portion 952 , such that the first portion 1340 may couple with other programming tile 1300 portions, such as the second portion 941 , as discussed elsewhere herein.
- This technology yields numerous advantages including, but not limited to, providing a low-cost alternative for developing a nearly limitless range of applications that blend both physical and digital mediums by reusing existing hardware (e.g., camera) and leveraging novel lightweight detection and recognition algorithms, having low implementation costs, being compatible with existing computing device hardware, operating in real-time to provide for a rich, real-time virtual experience, processing numerous (e.g., >15, >25, >35, etc.) physical interface object(s) 120 simultaneously without overwhelming the computing device, recognizing physical interface object(s) 120 with substantially perfect recall and precision (e.g., 99% and 99.5%, respectively), being capable of adapting to lighting changes and wear and imperfections in physical interface object(s) 120 , providing a collaborative tangible experience between users in disparate locations, being intuitive to setup and use even for young users (e.g., 3+ years old), being natural and intuitive to use, and requiring few or no constraints on the types of physical interface object(s) 120 that can be processed.
- various implementations may be presented herein in terms of algorithms and symbolic representations of operations on data bits within a computer memory.
- An algorithm is here, and generally, conceived to be a self-consistent set of operations leading to a desired result.
- the operations are those requiring physical manipulations of physical quantities.
- these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
- Various implementations described herein may relate to an apparatus for performing the operations herein.
- This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
- a computer program may be stored in a computer readable storage medium, including, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memories including USB keys with non-volatile memory, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
- the technology described herein can take the form of a hardware implementation, a software implementation, or implementations containing both hardware and software elements.
- the technology may be implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- the technology can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
- a computer-usable or computer readable medium can be any non-transitory storage apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- a data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.
- the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
- I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems, storage devices, remote printers, etc., through intervening private and/or public networks.
- Wireless (e.g., Wi-Fi™) transceivers, Ethernet adapters, and modems are just a few examples of network adapters.
- The private and public networks may have any number of configurations and/or topologies. Data may be transmitted between these devices via the networks using a variety of different communication protocols including, for example, various Internet layer, transport layer, or application layer protocols.
- For example, data may be transmitted via the networks using transmission control protocol/Internet protocol (TCP/IP), user datagram protocol (UDP), transmission control protocol (TCP), hypertext transfer protocol (HTTP), secure hypertext transfer protocol (HTTPS), dynamic adaptive streaming over HTTP (DASH), real-time streaming protocol (RTSP), real-time transport protocol (RTP) and the real-time transport control protocol (RTCP), voice over Internet protocol (VOIP), file transfer protocol (FTP), WebSocket (WS), wireless application protocol (WAP), various messaging protocols (SMS, MMS, XMS, IMAP, SMTP, POP, WebDAV, etc.), or other known protocols.
- The modules, routines, features, attributes, methodologies, and other aspects of the disclosure can be implemented as software, hardware, firmware, or any combination of the foregoing.
- Wherever an element, an example of which is a module, of the specification is implemented as software, the element can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future.
- The disclosure is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure is intended to be illustrative, but not limiting, of the scope of the subject matter set forth in the following claims.
Description
- This application is a divisional of Non-Provisional application Ser. No. 15/604,620, entitled “Virtualized Tangible Programming”, filed on May 24, 2017, which claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 62/341,041, entitled “Virtualized Tangible Programming”, filed on May 24, 2016, the entire contents of which are incorporated herein by reference.
- The present disclosure relates to virtualized tangible programming.
- A tangible user interface is a physical environment that a user can physically interact with to manipulate digital information. While tangible user interfaces have opened up a new range of possibilities for interacting with digital information, significant challenges remain when implementing such an interface. For instance, existing tangible user interfaces generally require expensive, high-quality sensors to digitize user interactions with this environment, which results in systems incorporating these tangible user interfaces being too expensive for most consumers. In addition, these existing systems are often difficult to set up and use, which has led to limited customer use and adoption.
- Additionally, there is growing momentum for supporting computational literacy activities throughout K12 education, starting at the earliest grade levels. However, one of the greatest challenges facing the adoption of computational literacy programs, such as developmentally appropriate technology in classrooms, is that stakeholders (e.g., teachers) must feel comfortable and confident with the materials. This includes making sure that the technology is accessible to and understandable by stakeholders. The technology should also meet other objectives, such as aligning with a pedagogical philosophy, such as that of early childhood educators, which emphasizes rich sensory-motor experiences, open-ended exploration, and social interaction.
- However, while some solutions have been developed to teach computational literacy (e.g., programming) to children, these solutions have had limited success, often due to their complexity or cost. For instance, some existing tangible programming systems that rely on computer vision require use of dedicated hardware (e.g., an overhead camera fixture, an interactive surface with built-in camera hardware, or other bulky, complicated, cumbersome, and/or expensive specialized equipment). These solutions often require specialized training to set up, configure, and customize the experience to the abilities of a (potentially diverse) audience, which deters adoption.
- The technology described herein addresses the deficiencies of other solutions by providing a flexible, portable, highly-responsive, and practical, tangible programming platform.
- According to one innovative aspect of the subject matter in this disclosure, a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. One general aspect includes a computer-implemented method including: detecting objects in image data; performing comparisons between each of the objects and a predefined set of object definitions; recognizing each of the objects as a visually quantified object or a visually unquantified object based on the comparisons; for each of the objects that is recognized as a visually quantified object, processing a command region and a quantifier region from the object, identifying a corresponding command for the object based on a particular visual attribute of the command region, and identifying a quantifier for the command based on a particular visual attribute of the quantifier region; for each of the objects that is recognized as a visually unquantified object, identifying a corresponding command for the object based on a particular visual attribute of the object; and executing, using a computer processor, a set of commands including the corresponding command for each of the objects detected in the image data. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
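- As a rough illustration of the recognition and execution steps described in this aspect, the following Python sketch shows one way detected objects might be classified as visually quantified or unquantified and mapped to commands. The class names, the color-to-command table, and the default repeat count are assumptions made purely for illustration and are not taken from this disclosure.

```python
# Minimal sketch (not the claimed method): classify detected tiles as visually quantified
# or unquantified objects and map each to a command and a repeat count.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedObject:
    command_color: str               # dominant color found in the command region
    quantifier_digit: Optional[int]  # digit read from the quantifier region, if any

# Hypothetical predefined set of object definitions: visual attribute -> command.
OBJECT_DEFINITIONS = {"green": "move", "blue": "jump", "orange": "action"}

def recognize(obj: DetectedObject) -> tuple[str, int]:
    """Return (command, repeat_count) for one detected tile."""
    command = OBJECT_DEFINITIONS.get(obj.command_color, "noop")
    if obj.quantifier_digit is not None:   # visually quantified object
        return command, obj.quantifier_digit
    return command, 1                      # visually unquantified object: execute once

def execute(commands: list[tuple[str, int]]) -> list[str]:
    """Expand (command, count) pairs into the flat sequence that drives the virtual scene."""
    executed = []
    for name, count in commands:
        executed.extend([name] * count)
    return executed

if __name__ == "__main__":
    tiles = [DetectedObject("green", 2), DetectedObject("blue", None)]
    print(execute([recognize(t) for t in tiles]))   # ['move', 'move', 'jump']
```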
- Implementations may include one or more of the following features. The computer-implemented method further including: displaying a virtual environment in a user interface, the virtual environment including a target virtual object; and capturing the image data, which depicts a sequence of physical interface objects arranged in a physical environment, where detecting the objects in the image data includes detecting representations of the physical interface objects forming the sequence. The computer-implemented method further including: displaying a virtual environment in a user interface, the virtual environment including a target virtual object; visually manipulating the target virtual object in the virtual environment responsive to executing the set of commands. The computer-implemented method where executing the set of commands includes. The computer-implemented method may also include building an instruction set including the corresponding command for each of the objects detected in the image data. The computer-implemented method may also include executing the instruction set using the computer processor. The computer-implemented method where building the instruction set includes. The computer-implemented method may also include generating one or more clusters of the objects based on relative positions and relative orientations of the objects. The computer-implemented method may also include determining a sequence for the commands of the instruction set based on the one or more clusters. The computer-implemented method further including: determining that a candidate object is missing from a candidate location in the one or more clusters based on the relative positions and relative orientations of the objects; and injecting, into the instruction set, a command corresponding to the candidate object at a position corresponding to the candidate location. The computer-implemented method where, for each of the objects that is recognized as a visually unquantified object, identifying the corresponding command for the object based on the particular visual attribute of the object includes: identifying an end object for a sequence of the objects detected from the image data; and determining a physical state of the end object from the image data, where executing the set of commands includes determining to execute based on the physical state of the end object detected from the image data. The computer-implemented method where a physical object associated with the end object depicted by the image data includes a user-pressable button that changes an aspect of the physical object from a first state to a second state in which the user-pressable button is in a pressed state that is visually perceptible, the image data depicts the end object in the second state, and determining the physical state of the end object includes using blob detection and machine learning to determine the physical state of the end object is a pressed state. The computer-implemented method where the end object includes a physical state including one of a pressed state, an unpressed state, a semi-pressed state, and a rubbish state that is indeterminable. 
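- The clustering and candidate-injection behavior described in the preceding paragraph can be pictured with a small sketch. The example below assumes a simple left-to-right tile layout and an arbitrary gap tolerance: it groups detected tiles into rows by position, orders each row, and injects a placeholder command where a tile-sized hole suggests a candidate object is missing. None of the names, tolerances, or geometry come from the specification.

```python
# Illustrative sketch only: build an instruction set from tile positions and inject a
# command for a candidate tile that appears to be missing from the cluster.
from dataclasses import dataclass

@dataclass
class Tile:
    x: float        # center x in image coordinates
    y: float        # center y in image coordinates
    command: str

def build_instruction_set(tiles: list[Tile], tile_width: float = 50.0) -> list[str]:
    # Cluster into rows by y, then order each row left to right.
    rows: dict[int, list[Tile]] = {}
    for t in tiles:
        rows.setdefault(round(t.y / tile_width), []).append(t)

    instructions: list[str] = []
    for _, row in sorted(rows.items()):
        row.sort(key=lambda t: t.x)
        for prev, cur in zip(row, row[1:]):
            instructions.append(prev.command)
            if cur.x - prev.x > 1.8 * tile_width:   # a tile-sized hole: inject a placeholder
                instructions.append("missing-candidate")
        instructions.append(row[-1].command)
    return instructions

if __name__ == "__main__":
    detected = [Tile(10, 12, "walk"), Tile(60, 11, "walk"), Tile(210, 13, "jump")]
    print(build_instruction_set(detected))
    # ['walk', 'walk', 'missing-candidate', 'jump']
```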
The computer-implemented method where recognizing each of the objects as a visually quantified object includes performing blob detection to detect a directional region of at least one object of the objects as including a directional indicator, and processing the command region and the quantifier region includes dividing the object into the action region and the quantifier region based on the directional region. The computer-implemented method where, the directional indicator is pointed one or more of up, down, left, and right. The computer-implemented method where, the particular visual attribute of the command region includes a predetermined color or graphic, and the particular visual attribute of the quantifier region includes a number. The computer-implemented method where executing the instruction set further includes: displaying a virtual environment in a user interface, the virtual environment including a target virtual object; determining a path of the target virtual object through a portion of the virtual environment based on the instruction set; and displaying a path projection of the path to a user. The computer-implemented method where the command region includes an action region and a direction region and where identifying the quantified command based on the visual attributes of the command region further includes: identifying an action command based on visual attributes of the action region; and identifying a direction command based on visual attributes of the direction region. The computer-implemented method where the specific command includes one of a jump command, a move command, and an action command. The computer-implemented method where executing the specific command based on the quantifier further includes: repeating for an amount of the quantifier, the executing of one of the jump command, the move command, and the action command. The computer-implemented method where executing the specific command based on the quantifier to manipulate the virtual object includes presenting the virtual object moving through a virtual environment based on the specific command. The computer-implemented method further including: generating a new virtual object for presentation on the display device based on the specific command. The physical interface object further including: a pressable button situated on the top surface, where the pressable button may be interacted with to transition the pressable button between states when the pressable button is depressed. The physical interface object where the visual aspects are configured to alter their appearance in response to a pressable button being depressed. The physical interface object where the compatible physical interface object includes a second top surface including one or more second visual aspects. The physical interface object further including: a dial coupled to the top surface, the dial including one or more visual directional indicator aspects. The physical interface object where the dial is a rotatable dial that can be rotated horizontally to the position of the top surface. The physical interface object where the physical interface object includes a command region and the compatible physical interface object includes a quantifier region such that when the physical interface object is coupled with the compatible physical interface object a visually quantified object is formed. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
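- One way to picture the region-splitting step above is to slice the cropped tile image on either side of the detected directional indicator. The sketch below assumes the indicator's horizontal position has already been located (e.g., by blob detection) and that the command region lies to its left and the quantifier region to its right; the geometry and names are assumptions for illustration, not the claimed algorithm.

```python
# Rough sketch under assumed geometry: split a tile image into command and quantifier
# regions around the detected directional indicator.
import numpy as np

def split_regions(tile: np.ndarray, indicator_col: int):
    """tile: H x W x 3 image of one physical interface object.
    indicator_col: x position of the detected directional indicator.
    Returns (command_region, quantifier_region) as image slices."""
    command_region = tile[:, :indicator_col]     # e.g., the action icon to the left
    quantifier_region = tile[:, indicator_col:]  # e.g., the printed number to the right
    return command_region, quantifier_region

if __name__ == "__main__":
    fake_tile = np.zeros((40, 120, 3), dtype=np.uint8)   # stand-in for a cropped tile image
    cmd, qty = split_regions(fake_tile, indicator_col=80)
    print(cmd.shape, qty.shape)   # (40, 80, 3) (40, 40, 3)
```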
- One general aspect includes the computer-implemented method where the candidate object is one of an end object, an event object, and an action object missing from the image data. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- One general aspect includes a computer-implemented method including: detecting an object from image data; recognizing the object as a numerically quantified object based on a predetermined visual characteristic; processing the recognized object into a command region and a quantifier region; identifying a specific command for manipulating, based on a visual attribute of the command region, a virtual object rendered for display in a virtual environment displayed on a display of the computing device; identifying a quantifier for the specific command based on a visual attribute of the quantifier region; and executing, using a processor of the computing device, the specific command based on the quantifier to manipulate the virtual object. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- Implementations may include one or more of the following features. The computer-implemented method where the specific command includes one of a jump command, a move command, and an action command. The computer-implemented method where executing the specific command based on the quantifier further includes: repeating for an amount of the quantifier, the executing of one of the jump command, the move command, and the action command. The computer-implemented method where executing the specific command based on the quantifier to manipulate the virtual object includes presenting the virtual object moving through a virtual environment based on the specific command. The computer-implemented method further including: generating a new virtual object for presentation on the display device based on the specific command. The physical interface object further including: a pressable button situated on the top surface, where the pressable button may be interacted with to transition the pressable button between states when the pressable button is depressed. The physical interface object where the visual aspects are configured to alter their appearance in response to a pressable button being depressed. The physical interface object where the compatible physical interface object includes a second top surface including one or more second visual aspects. The physical interface object further including: a dial coupled to the top surface, the dial including one or more visual directional indicator aspects. The physical interface object where the dial is a rotatable dial that can be rotated horizontally to the position of the top surface. The physical interface object where the physical interface object includes a command region and the compatible physical interface object includes a quantifier region such that when the physical interface object is coupled with the compatible physical interface object a visually quantified object is formed. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
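- The multiplier behavior of the quantifier can be illustrated with a short sketch in which a recognized command (jump, move, or action) is simply repeated the number of times given by the quantifier. The TargetObject class and its methods are hypothetical stand-ins for the target virtual object, not an implementation from this disclosure.

```python
# Hedged illustration: repeat a recognized command the number of times given by the quantifier.
class TargetObject:
    def __init__(self):
        self.x = 0
        self.log = []

    def move(self):
        self.x += 1
        self.log.append(f"moved to {self.x}")

    def jump(self):
        self.log.append(f"jumped at {self.x}")

    def action(self):
        self.log.append(f"acted at {self.x}")

def execute_quantified(target: TargetObject, command: str, quantifier: int = 1) -> None:
    handler = {"move": target.move, "jump": target.jump, "action": target.action}[command]
    for _ in range(quantifier):     # the quantifier multiplies the command
        handler()

if __name__ == "__main__":
    character = TargetObject()
    execute_quantified(character, "move", 2)   # e.g., a "move" tile coupled with a "2" tile
    execute_quantified(character, "jump")      # an unquantified tile runs once
    print(character.log)
```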
- One general aspect includes a computer-implemented method including: presenting a user interface including a virtual environment and a target object, determining an initial state of the target object in the virtual environment of the user interface, capturing an image of a physical activity surface, processing the image to detect two or more physical interface objects in a specific orientation, comparing the physical interface objects in the specific orientation to a predefined set of instructions, determining a command represented by the physical interface objects in the specific orientation based on the comparison, determining a path through the virtual environment for the target object using the command, and displaying a path projection in the user interface along the path for presentation to a user. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
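- For the path-projection aspect above, the following sketch assumes a simple grid-based virtual environment and computes the cells the target object would visit for a list of commands, which a user interface could then highlight as a path projection. The grid model, the jump distance, and the command format are illustrative assumptions only.

```python
# Sketch under stated assumptions (a toy grid world, not the actual virtual scene):
# derive the path the target object would take for a command list.
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def project_path(start: tuple[int, int], commands: list[tuple[str, str, int]]) -> list[tuple[int, int]]:
    """commands: (verb, direction, count) triples, e.g. ('move', 'right', 2).
    Returns every grid cell the target object would visit, starting cell included."""
    x, y = start
    path = [(x, y)]
    for verb, direction, count in commands:
        dx, dy = DIRECTIONS[direction]
        step = 2 if verb == "jump" else 1      # assume a jump clears one extra cell
        for _ in range(count):
            x, y = x + dx * step, y + dy * step
            path.append((x, y))
    return path

if __name__ == "__main__":
    print(project_path((0, 0), [("move", "right", 2), ("jump", "up", 1)]))
    # [(0, 0), (1, 0), (2, 0), (2, -2)]
```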
- One general aspect includes a computer-implemented method including: receiving, from a video capture device, a video stream that includes a physical activity scene of a physical activity surface, proximate to a display device, and one or more physical interface objects placed on the physical activity scene and physically interactable with by a user; processing, using one or more computing devices, the video stream to detect the one or more physical interface objects included in the physical activity scene; recognizing each of the physical interface objects as a visually quantified object or a visually unquantified object based on the comparisons; for each of the physical interface objects that is recognized as a visually quantified object, processing a command region and a quantifier region from the object, identifying a corresponding command for the physical interface object based on a particular visual attribute of the command region, and identifying a quantifier for the command based on a particular visual attribute of the quantifier region; for each of the physical interface objects that is recognized as a visually unquantified object, identifying a corresponding command for the physical interface object based on a particular visual attribute of the object; and executing, using the one or more computing devices, a set of commands including the corresponding command for each of the objects detected in the image data to present virtual information on the display device. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- One general aspect includes a visual tangible programming system including: a video capture device coupled for communication with a computing device, the video capture device being adapted to capture a video stream that includes a physical activity scene adjacent to the computing device; a detector, coupled to the computing device, the detector being adapted to detect within the video stream a sequence of physical interface objects in the physical activity scene; a processor of the computing device, the processor being adapted to compare the sequence of physical interface objects to a predefined set of object definitions and recognize visually quantified objects and visually unquantified objects based on the comparison, and execute a set of commands based on the visually quantified objects and visually unquantified objects; and a display coupled to the computing device, the display being adapted to display an interface that includes a virtual scene and update the virtual scene based on the executed set of commands. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
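- At a high level, this system aspect can be read as a capture-detect-execute-display loop. The stub functions below sketch that loop in Python; each stub merely stands in for the corresponding component (video capture device, detector, processor, display) and returns canned data, so the names and return values are assumptions for illustration only.

```python
# High-level sketch of the capture -> detect -> execute -> display loop; every function
# is a stand-in stub, not an implementation of the components named in the claims.
def capture_frame():
    """Stand-in for the video capture device; would return one frame of the activity scene."""
    return "frame"

def detect_tiles(frame):
    """Stand-in for the detector; would return physical interface objects found in the frame."""
    return [("move", 2), ("jump", 1)]

def run_commands(tiles):
    """Stand-in for command execution over the recognized objects."""
    return [name for name, count in tiles for _ in range(count)]

def update_display(executed):
    """Stand-in for updating the virtual scene on the display."""
    print("virtual scene update:", executed)

def tick():
    frame = capture_frame()
    tiles = detect_tiles(frame)
    update_display(run_commands(tiles))

if __name__ == "__main__":
    tick()   # one pass of the loop; a real system would repeat this per frame
```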
- One general aspect includes a physical interface object for constructing a computer program in a physical space including: a housing including a top surface, a lower side surface, an upper side surface, a left side surface, a right side surface, and a bottom surface; the top surface including one or more visual aspects; one or more of the lower side surface, the upper side surface, the left side surface, and the right side surface including one or more magnetic fasteners configured to couple to a corresponding side surface of a compatible physical interface object; and one or more of the lower side surface, the upper side surface, the left side surface, and the right side surface including an alignment mechanism for coupling to a compatible alignment mechanism of a compatible physical interface object. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
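- The housing described in this aspect can be modeled, purely for illustration, as a small data structure that records which side surfaces carry magnetic fasteners and alignment mechanisms, together with a check of whether two tiles can couple on facing sides. The field names and the assumption that mating sides must be opposite one another are illustrative and are not taken from the claims.

```python
# Purely illustrative data model of a tile housing and a coupling-compatibility check.
from dataclasses import dataclass, field

SIDES = ("lower", "upper", "left", "right")
OPPOSITE = {"lower": "upper", "upper": "lower", "left": "right", "right": "left"}

@dataclass
class TileHousing:
    magnetic_sides: set = field(default_factory=lambda: set(SIDES))
    alignment_sides: set = field(default_factory=lambda: set(SIDES))

def can_couple(a: TileHousing, a_side: str, b: TileHousing) -> bool:
    """True if tile `a`'s side can mate with the opposite side of tile `b`."""
    b_side = OPPOSITE[a_side]
    return (a_side in a.magnetic_sides and b_side in b.magnetic_sides
            and a_side in a.alignment_sides and b_side in b.alignment_sides)

if __name__ == "__main__":
    verb_tile = TileHousing()
    quantifier_tile = TileHousing(magnetic_sides={"left"}, alignment_sides={"left"})
    print(can_couple(verb_tile, "right", quantifier_tile))   # True: right mates with left
```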
- Implementations may include one or more of the following features. The physical interface object further including: a pressable button situated on the top surface, where the pressable button may be interacted with to transition the pressable button between states when the pressable button is depressed. The physical interface object where the visual aspects are configured to alter their appearance in response to a pressable button being depressed. The physical interface object where the compatible physical interface object includes a second top surface including one or more second visual aspects. The physical interface object further including: a dial coupled to the top surface, the dial including one or more visual directional indicator aspects. The physical interface object where the dial is a rotatable dial that can be rotated horizontally to the position of the top surface. The physical interface object where the physical interface object includes a command region and the compatible physical interface object includes a quantifier region such that when the physical interface object is coupled with the compatible physical interface object a visually quantified object is formed. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
- Other implementations of one or more of these aspects and other aspects described in this document include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices. The above and other implementations are advantageous in a number of respects as articulated through this document. Moreover, it should be understood that the language used in the present disclosure has been principally selected for readability and instructional purposes, and not to limit the scope of the subject matter disclosed herein.
- The disclosure is illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals are used to refer to similar elements.
- FIG. 1 is a graphical representation illustrating an example configuration for virtualized tangible programming.
- FIG. 2 is a block diagram illustrating an example computer system for virtualized tangible programming.
- FIG. 3 is a block diagram illustrating an example computing device.
- FIGS. 4A-4D are graphical representations illustrating example physical interface objects.
- FIG. 5 is a graphical representation illustrating an example sequence of physical interface objects.
- FIG. 6 is a flowchart of an example method for virtualized tangible programming.
- FIG. 7 is a flowchart of an example method for virtualized tangible programming.
- FIGS. 8A-8E are graphical representations illustrating example interfaces for virtualized tangible programming.
- FIGS. 9A-13H portray various views of example physical interface objects.
- The technology described herein provides a platform for a real time, tangible programming environment. The programming environment is intuitive and allows users to understand how to construct programs without prior training. For example, a user may create a sequence of physical interface objects and cause a virtual scene to change based on executed commands that correspond to the sequence of physical interface objects.
-
FIG. 1 is a graphical representation illustrating anexample configuration 100 of a system for virtualized tangible programming. Theconfiguration 100 may be used for various activities in thephysical activity scene 116. As depicted, theconfiguration 100 includes, in part, a tangible,physical activity surface 102 on which physical interface object(s) 120 may be placed (e.g., drawn, created, molded, built, projected, etc.) and acomputing device 104 that is equipped or otherwise coupled to avideo capture device 110 configured to capture video of theactivity surface 102. The physical interface object(s) 120 may be arranged in thephysical activity scene 116 in a collection, which may form a computer program (e.g., a sequence of programming instructions/commands). The various components that make up the physical interface object(s) 120 may be coupled (e.g., mated, aligned, slid in and out) together in different combinations and/or or physically manipulated (e.g., rotated, pressed, switched, etc.). For example, verb tiles may be joined with other verb tiles, unit tiles and/or adverb tiles may be joined with verb tiles (e.g., like puzzle pieces.), directional dials may be rotated, etc. Thecomputing device 104 includes novel software and/or hardware capable of executing commands to manipulate a targetvirtual object 122 based on the physical interface object(s) 120. - While the
activity surface 102 is depicted as substantially horizontal inFIG. 1 , it should be understood that theactivity surface 102 can be vertical or positioned at any other angle suitable to the user for interaction. Theactivity surface 102 can have any color, pattern, texture, and topography. For instance, theactivity surface 102 can be substantially flat or be disjointed/discontinuous in nature. Non-limiting examples of anactivity surface 102 include a table, desk, counter, ground, a wall, a whiteboard, a chalkboard, a customized surface, etc. Theactivity surface 102 may additionally or alternatively include a medium on which the user may render physical interface object(s) 120, such as paper, canvas, fabric, clay, foam, or other suitable medium. - In some implementations, the
activity surface 102 may be preconfigured for certain activities. As depicted inFIG. 1 , an example configuration may include anactivity surface 102 that includes aphysical activity scene 116, such as a whiteboard or drawing board. Thephysical activity scene 116 may be integrated with thestand 106 or may be distinct from thestand 106 but placeable adjacent to thestand 106. Thephysical activity scene 116 can indicate to the user the boundaries of theactivity surface 102 that is within the field of view of thevideo capture device 110. In some implementations, thephysical activity scene 116 may be a board, such as a chalkboard or whiteboard, separate from theactivity surface 102. - In some instances, the size of the interactive area on the
physical activity scene 116 may be bounded by the field of view of thevideo capture device 110 and can be adapted by anadapter 108 and/or by adjusting the position of thevideo capture device 110. In additional examples, thephysical activity scene 116 may be a light projection (e.g., pattern, context, shapes, etc.) projected onto theactivity surface 102. - The
computing device 104 included in theexample configuration 100 may be situated on thephysical activity surface 102 or otherwise proximate to thephysical activity surface 102. Thecomputing device 104 can provide the user(s) with a virtual portal for viewing avirtual scene 118. For example, thecomputing device 104 may be placed on a table in front of a user so the user can easily see thecomputing device 104 while interacting with physical interface object(s) 120 on thephysical activity surface 102.Example computing devices 104 may include, but are not limited to, mobile phones (e.g., feature phones, smart phones, etc.), tablets, laptops, desktops, netbooks, TVs, set-top boxes, media streaming devices, portable media players, navigation devices, personal digital assistants, etc. - The
computing device 104 includes or is otherwise coupled (e.g., via a wireless or wired connection) to a video capture device 110 (also referred to herein as a camera) for capturing a video stream of theactivity surface 102. As depicted inFIG. 1 thevideo capture device 110 may be a front-facing camera that is equipped with anadapter 108 that adapts the field of view of thecamera 110 to include, at least in part, theactivity surface 102. For clarity, thephysical activity scene 116 of theactivity surface 102 captured by thevideo capture device 110 is also interchangeably referred to herein as the activity surface or the activity scene in some implementations. - As depicted in
FIG. 1 , thecomputing device 104 and/or thevideo capture device 110 may be positioned and/or supported by astand 106. For instance, thestand 106 may position thedisplay 112 of thevideo capture device 110 in a position that is optimal for viewing and interaction by the user who is simultaneously interacting with the physical environment (physical activity scene 116). Thestand 106 may be configured to rest on theactivity surface 102 and receive and sturdily hold thecomputing device 104 so thecomputing device 104 remains still during use. - In some implementations, the
adapter 108 adapts a video capture device 110 (e.g., front-facing, rear-facing camera) of thecomputing device 104 to capture substantially only thephysical activity scene 116, although numerous further implementations are also possible and contemplated. For instance, thecamera adapter 108 can split the field of view of the front-facing camera into two scenes. In this example with two scenes, thevideo capture device 110 captures aphysical activity scene 116 that includes a portion of theactivity surface 102 and is able to capture physical interface object(s) 120 in either portion of thephysical activity scene 116. In another example, thecamera adapter 108 can redirect a rear-facing camera of the computing device (not shown) toward a front-side of thecomputing device 104 to capture thephysical activity scene 116 of theactivity surface 102 located in front of thecomputing device 104. In some implementations, theadapter 108 can define one or more sides of the scene being captured (e.g., top, left, right, with bottom open). - The
adapter 108 and stand 106 for acomputing device 104 may include a slot for retaining (e.g., receiving, securing, gripping, etc.) an edge of thecomputing device 104 to cover at least a portion of thecamera 110. Theadapter 108 may include at least one optical element (e.g., a mirror) to direct the field of view of thecamera 110 toward theactivity surface 102. Thecomputing device 104 may be placed in and received by a compatibly sized slot formed in a top side of thestand 106. The slot may extend at least partially downward into a main body of thestand 106 at an angle so that when thecomputing device 104 is secured in the slot, it is angled back for convenient viewing and utilization by its user or users. Thestand 106 may include a channel formed perpendicular to and intersecting with the slot. The channel may be configured to receive and secure theadapter 108 when not in use. For example, theadapter 108 may have a tapered shape that is compatible with and configured to be easily placeable in the channel of thestand 106. In some instances, the channel may magnetically secure theadapter 108 in place to prevent theadapter 108 from being easily jarred out of the channel. Thestand 106 may be elongated along a horizontal axis to prevent thecomputing device 104 from tipping over when resting on a substantially horizontal activity surface (e.g., a table). Thestand 106 may include channeling for a cable that plugs into thecomputing device 104. The cable may be configured to provide power to thecomputing device 104 and/or may serve as a communication link to other computing devices, such as a laptop or other personal computer. - In some implementations, the
adapter 108 may include one or more optical elements, such as mirrors and/or lenses, to adapt the standard field of view of thevideo capture device 110. For instance, theadapter 108 may include one or more mirrors and lenses to redirect and/or modify the light being reflected fromactivity surface 102 into thevideo capture device 110. As an example, theadapter 108 may include a mirror angled to redirect the light reflected from theactivity surface 102 in front of thecomputing device 104 into a front-facing camera of thecomputing device 104. As a further example, many wireless handheld devices include a front-facing camera with a fixed line of sight with respect to thedisplay 112 including avirtual scene 118. Theadapter 108 can be detachably connected to the device over thecamera 110 to augment the line of sight of thecamera 110 so it can capture the activity surface 102 (e.g., surface of a table). The mirrors and/or lenses in some implementations can be polished or laser quality glass. In other examples, the mirrors and/or lenses may include a first surface that is a reflective element. The first surface can be a coating/thin film capable of redirecting light without having to pass through the glass of a mirror and/or lens. In an alternative example, a first surface of the mirrors and/or lenses may be a coating/thin film and a second surface may be a reflective element. In this example, the lights passes through the coating twice, however since the coating is extremely thin relative to the glass, the distortive effect is reduced in comparison to a conventional mirror. This reduces the distortive effect of a conventional mirror in a cost effective way. - In another example, the
adapter 108 may include a series of optical elements (e.g., mirrors) that wrap light reflected off of theactivity surface 102 located in front of thecomputing device 104 into a rear-facing camera of thecomputing device 104 so it can be captured. Theadapter 108 could also adapt a portion of the field of view of the video capture device 110 (e.g., the front-facing camera) and leave a remaining portion of the field of view unaltered so that multiple scenes may be captured by thevideo capture device 110 as shown inFIG. 1 . Theadapter 108 could also include optical element(s) that are configured to provide different effects, such as enabling thevideo capture device 110 to capture a greater portion of theactivity surface 102. For example, theadapter 108 may include a convex mirror that provides a fisheye effect to capture a larger portion of theactivity surface 102 than would otherwise be capturable by a standard configuration of thevideo capture device 110. - The
video capture device 110 could, in some implementations, be an independent unit that is distinct from thecomputing device 104 and may be positionable to capture theactivity surface 102 or may be adapted by theadapter 108 to capture theactivity surface 102 as discussed above. In these implementations, thevideo capture device 110 may be communicatively coupled via a wired or wireless connection to thecomputing device 104 to provide it with the video stream being captured. - The physical interface object(s) 120 in some implementations may be tangible objects that a user may interact with in the
physical activity scene 116. For example, the physical interface object(s) 120 in some implementations may be programming blocks that depict various programming actions and functions. A user may arrange a sequence of the programming blocks representing different actions and functions on thephysical activity scene 116 and thecomputing device 104 may process the sequence to determine a series of commands to execute in thevirtual scene 118. - The
virtual scene 118 in some implementations may be a graphical interface displayed on a display of the computing device 104. The virtual scene 118 may be set up to display prompts and actions to a user to assist in organizing the physical interface object(s) 120. For example, in some implementations, the virtual scene may include a target virtual object 122, depicted in FIG. 1 as an animated character. The user may create a series of commands using the physical interface object(s) 120 to control various actions of the target virtual object 122, such as making the target virtual object 122 move around the virtual scene 118, interact with an additional virtual object 124, perform a repeated action, etc. -
FIG. 2 is a block diagram illustrating anexample computer system 200 for virtualized tangible programming. The illustratedsystem 200 includescomputing devices 104 a . . . 104 n (also referred to individually and collectively as 104) andservers 202 a . . . 202 n (also referred to individually and collectively as 202), which are communicatively coupled via anetwork 206 for interaction with one another. For example, thecomputing devices 104 a . . . 104 n may be respectively coupled to thenetwork 206 viasignal lines 208 a . . . 208 n and may be accessed by users 222 a . . . 222 n (also referred to individually and collectively as 222). Theservers 202 a . . . 202 n may be coupled to thenetwork 206 viasignal lines 204 a . . . 204 n, respectively. The use of the nomenclature “a” and “n” in the reference numbers indicates that any number of those elements having that nomenclature may be included in thesystem 200. - The
network 206 may include any number of networks and/or network types. For example, thenetwork 206 may include, but is not limited to, one or more local area networks (LANs), wide area networks (WANs) (e.g., the Internet), virtual private networks (VPNs), mobile (cellular) networks, wireless wide area network (WWANs), WiMAX® networks, Bluetooth® communication networks, peer-to-peer networks, other interconnected data paths across which multiple devices may communicate, various combinations thereof, etc. - The
computing devices 104 a . . . 104 n (also referred to individually and collectively as 104) are computing devices having data processing and communication capabilities. For instance, acomputing device 104 may include a processor (e.g., virtual, physical, etc.), a memory, a power source, a network interface, and/or other software and/or hardware components, such as front and/or rear facing cameras, display, graphics processor, wireless transceivers, keyboard, camera, sensors, firmware, operating systems, drivers, various physical connection interfaces (e.g., USB, HDMI, etc.). Thecomputing devices 104 a . . . 104 n may couple to and communicate with one another and the other entities of thesystem 200 via thenetwork 206 using a wireless and/or wired connection. While two ormore computing devices 104 are depicted inFIG. 2 , thesystem 200 may include any number ofcomputing devices 104. In addition, thecomputing devices 104 a . . . 104 n may be the same or different types of computing devices. - As depicted in
FIG. 2 , one or more of thecomputing devices 104 a . . . 104 n may include acamera 110, adetection engine 212, and activity application(s) 214. One or more of thecomputing devices 104 and/orcameras 110 may also be equipped with anadapter 108 as discussed elsewhere herein. Thedetection engine 212 is capable of detecting and/or recognizing physical interface object(s) 120 located in/on the physical activity scene 116 (e.g., on theactivity surface 102 within field of view of camera 110). Thedetection engine 212 can detect the position and orientation of the physical interface object(s) 120 in physical space, detect how the physical interface object(s) 120 are being manipulated by the user 222, and cooperate with the activity application(s) 214 to provide users 222 with a rich virtual experience by executing commands in thevirtual scene 118 based on the physical interface object(s) 120. - In some implementations, the
detection engine 212 processes video captured by acamera 110 to detect physical interface object(s) 120. The activity application(s) 214 are capable of executing a series of commands in thevirtual scene 118 based on the detected physical interface object(s) 120. Additional structure and functionality of thecomputing devices 104 are described in further detail below with reference to at leastFIG. 3 . - The servers 202 may each include one or more computing devices having data processing, storing, and communication capabilities. For example, the servers 202 may include one or more hardware servers, server arrays, storage devices and/or systems, etc., and/or may be centralized or distributed/cloud-based. In some implementations, the servers 202 may include one or more virtual servers, which operate in a host server environment and access the physical hardware of the host server including, for example, a processor, memory, storage, network interfaces, etc., via an abstraction layer (e.g., a virtual machine manager).
- The servers 202 may include software applications operable by one or more computer processors of the servers 202 to provide various computing functionalities, services, and/or resources, and to send data to and receive data from the
computing devices 104. For example, the software applications may provide functionality for internet searching; social networking; web-based email; blogging; micro-blogging; photo management; video, music and multimedia hosting, distribution, and sharing; business services; news and media distribution; user account management; or any combination of the foregoing services. It should be understood that the servers 202 are not limited to providing the above-noted services and may include other network-accessible services. - It should be understood that the
system 200 illustrated inFIG. 2 is provided by way of example, and that a variety of different system environments and configurations are contemplated and are within the scope of the present disclosure. For instance, various functionality may be moved from a server to a client, or vice versa and some implementations may include additional or fewer computing devices, services, and/or networks, and may implement various functionality client or server-side. Further, various entities of thesystem 200 may be integrated into a single computing device or system or additional computing devices or systems, etc. -
FIG. 3 is a block diagram of anexample computing device 104. As depicted, thecomputing device 104 may include aprocessor 312,memory 314,communication unit 316,display 112,camera 110, and aninput device 318, which are communicatively coupled by acommunications bus 308. However, it should be understood that thecomputing device 104 is not limited to such and may include other elements, including, for example, those discussed with reference to thecomputing devices 104 inFIGS. 1 and 2 . - The
processor 312 may execute software instructions by performing various input/output, logical, and/or mathematical operations. Theprocessor 312 has various computing architectures to process data signals including, for example, a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, and/or an architecture implementing a combination of instruction sets. Theprocessor 312 may be physical and/or virtual, and may include a single core or plurality of processing units and/or cores. - The
memory 314 is a non-transitory computer-readable medium that is configured to store and provide access to data to the other elements of thecomputing device 104. In some implementations, thememory 314 may store instructions and/or data that may be executed by theprocessor 312. For example, thememory 314 may store thedetection engine 212, the activity application(s) 214, and thecamera driver 306. Thememory 314 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, data, etc. Thememory 314 may be coupled to thebus 308 for communication with theprocessor 312 and the other elements of thecomputing device 104. - The
communication unit 316 may include one or more interface devices (I/F) for wired and/or wireless connectivity with thenetwork 206 and/or other devices. In some implementations, thecommunication unit 316 may include transceivers for sending and receiving wireless signals. For instance, thecommunication unit 316 may include radio transceivers for communication with thenetwork 206 and for communication with nearby devices using close-proximity (e.g., Bluetooth®, NFC, etc.) connectivity. In some implementations, thecommunication unit 316 may include ports for wired connectivity with other devices. For example, thecommunication unit 316 may include a CAT-5 interface, Thunderbolt™ interface, FireWire™ interface, USB interface, etc. - The
display 112 may display electronic images and data output by thecomputing device 104 for presentation to a user 222. Thedisplay 112 may include any conventional display device, monitor or screen, including, for example, an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), etc. In some implementations, thedisplay 112 may be a touch-screen display capable of receiving input from one or more fingers of a user 222. For example, thedisplay 112 may be a capacitive touch-screen display capable of detecting and interpreting multiple points of contact with the display surface. In some implementations, thecomputing device 104 may include a graphics adapter (not shown) for rendering and outputting the images and data for presentation ondisplay 112. The graphics adapter (not shown) may be a separate processing device including a separate processor and memory (not shown) or may be integrated with theprocessor 312 andmemory 314. - The
input device 318 may include any device for inputting information into thecomputing device 104. In some implementations, theinput device 318 may include one or more peripheral devices. For example, theinput device 318 may include a keyboard (e.g., a QWERTY keyboard), a pointing device (e.g., a mouse or touchpad), microphone, a camera, etc. In some implementations, theinput device 318 may include a touch-screen display capable of receiving input from the one or more fingers of the user 222. For instance, the functionality of theinput device 318 and thedisplay 112 may be integrated, and a user 222 of thecomputing device 104 may interact with thecomputing device 104 by contacting a surface of thedisplay 112 using one or more fingers. In this example, the user 222 could interact with an emulated (i.e., virtual or soft) keyboard displayed on the touch-screen display 112 by using fingers to contact thedisplay 112 in the keyboard regions. - The
detection engine 212 may include a detector 304. The elements of the detection engine 212 may be communicatively coupled by the bus 308 and/or the processor 312 to one another and/or to the other elements of the computing device 104. In some implementations, one or more of these elements are sets of instructions executable by the processor 312 to provide their functionality. In some implementations, one or more of these elements are stored in the memory 314 of the computing device 104 and are accessible and executable by the processor 312 to provide their functionality. In any of the foregoing implementations, these components may be adapted for cooperation and communication with the processor 312 and other elements of the computing device 104. - The
detector 304 includes software and/or logic for processing the video stream captured by thecamera 110 to detect physical interface object(s) 120 included in the video stream. In some implementations, thedetector 304 may identify line segments related to physical interface object(s) 120 included in thephysical activity scene 116. In some implementations, thedetector 304 may be coupled to and receive the video stream from thecamera 110, thecamera driver 306, and/or thememory 314. In some implementations, thedetector 304 may process the images of the video stream to determine positional information for the line segments related to the physical interface object(s) 120 in the activity scene 116 (e.g., location and/or orientation of the line segments in 2D or 3D space) and then analyze characteristics of the line segments included in the video stream to determine the identities and/or additional attributes of the line segments. - The
detector 304 may recognize the line by identifying its contours. The detector 304 may also identify various attributes of the line, such as colors, contrasting colors, depth, texture, etc. In some implementations, the detector 304 may use the description of the line and the line's attributes to identify the physical interface object(s) 120 by comparing the description and attributes to a database of objects and identifying the closest matches.
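- As a hedged illustration of the contour- and attribute-based matching just described, the sketch below uses OpenCV (which this disclosure does not name; it is assumed here only for demonstration) to find tile-sized contours in a frame, describe each by its bounding box and mean color, and match it to the nearest entry in a small database of object definitions. The colors, thresholds, and command names are assumptions.

```python
# Hedged sketch of contour-based tile detection and nearest-color matching (OpenCV 4.x assumed).
import cv2
import numpy as np

OBJECT_DEFINITIONS = {          # hypothetical command colors (B, G, R)
    "move": (60, 160, 60),      # greenish
    "jump": (160, 80, 40),      # bluish
}

def detect_tiles(frame: np.ndarray, min_area: float = 500.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue                                   # ignore specks and noise
        x, y, w, h = cv2.boundingRect(contour)
        mean_color = cv2.mean(frame[y:y + h, x:x + w])[:3]
        # Closest match in the definition database wins.
        command = min(OBJECT_DEFINITIONS,
                      key=lambda name: sum((a - b) ** 2 for a, b in
                                           zip(OBJECT_DEFINITIONS[name], mean_color)))
        results.append({"command": command, "box": (x, y, w, h)})
    return results

if __name__ == "__main__":
    blank = np.full((240, 320, 3), 255, np.uint8)
    cv2.rectangle(blank, (40, 40), (120, 120), (60, 160, 60), -1)   # fake a green tile
    print(detect_tiles(blank))
```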
- The detector 304 may be coupled to the storage 310 via the bus 308 to store, retrieve, and otherwise manipulate data stored therein. For example, the detector 304 may query the storage 310 for data matching any line segments that it has determined are present in the physical activity scene 116. In all of the above descriptions, the detector 304 may send the detected images to the detection engine 212 and the detection engine 212 may perform the above described features. - The
detector 304 may be able to process the video stream to detect sequences of physical interface object(s) 120 on thephysical activity scene 116. In some implementations, thedetector 304 may be configured to understand relational aspects between the physical interface object(s) 120 and determine a sequence, interaction, change, etc. based on the relational aspects. For example, thedetector 304 may be configured to identify an interaction related to one or more physical interface object(s) 120 present in thephysical activity scene 116 and the activity application(s) 214 may execute a series of commands based on the relational aspects between the one or more physical interface object(s) 120 and the interaction. For example, the interaction may be pressing a button incorporated into a physical interface object(s) 120. - The activity application(s) 214 include software and/or logic for receiving a sequence of physical interface object(s) 120 and identifying corresponding commands that can be executed in the
virtual scene 118. The activity application(s) 214 may be coupled to thedetector 304 via theprocessor 312 and/or thebus 308 to receive the detected physical interface object(s) 120. For example, a user 222 may arrange a sequence of physical interface object(s) 120 on thephysical activity scene 116. Thedetection engine 212 may then notify the activity application(s) 214 that a user has pressed an “execution block” in the sequence of the physical interface object(s) 120, causing the activity application(s) 214 to execute a set of commands associated with each of the physical interface object(s) 120 and manipulate the target virtual object 122 (e.g., move, remove, adjust, modify, etc., the targetvirtual object 122 and/or other objects and/or parameters in the virtual scene). - In some implementations, the activity application(s) 214 may determine the set of commands by searching through a database of commands that are compatible with the attributes of the detected physical interface object(s) 120. In some implementations, the activity application(s) 214 may access a database of commands stored in the
storage 310 of thecomputing device 104. In further implementations, the activity application(s) 214 may access a server 202 to search for commands. In some implementations, a user 222 may predefine a set of commands to include in the database of commands. For example, a user 222 can predefine that an interaction with a specific physical interface object 120 included in thephysical activity scene 116 to prompt the activity application(s) 214 to execute a predefined set of commands based on the interaction. - In some implementations, the activity application(s) 214 may enhance the
virtual scene 118 and/or the targetvirtual object 122 as part of the executed set of commands. For example, the activity application(s) 214 may display visual enhancements as part of executing the set of commands. The visual enhancements may include adding color, extra virtualizations, background scenery, etc. In further implementations, the visual enhancements may include having the targetvirtual object 122 move or interact with another virtualization (124) in thevirtual scene 118. - In some instances, the manipulation of the physical interface object(s) 120 by the user 222 in the
physical activity scene 116 may be incrementally presented in thevirtual scene 118 as the user 222 manipulates the physical interface object(s) 120, an example of which is shown inFIG. 9 . Non-limiting examples of the activity applications 214 may include video games, learning applications, assistive applications, storyboard applications, collaborative applications, productivity applications, etc. - The
camera driver 306 includes software storable in thememory 314 and operable by theprocessor 312 to control/operate thecamera 110. For example, thecamera driver 306 is a software driver executable by theprocessor 312 for signaling thecamera 110 to capture and provide a video stream and/or still image, etc. Thecamera driver 306 is capable of controlling various features of the camera 110 (e.g., flash, aperture, exposure, focal length, etc.). Thecamera driver 306 may be communicatively coupled to thecamera 110 and the other components of thecomputing device 104 via thebus 308, and these components may interface with thecamera driver 306 via thebus 308 to capture video and/or still images using thecamera 110. - As discussed elsewhere herein, the
camera 110 is a video capture device configured to capture video of at least theactivity surface 102. Thecamera 110 may be coupled to thebus 308 for communication and interaction with the other elements of thecomputing device 104. Thecamera 110 may include a lens for gathering and focusing light, a photo sensor including pixel regions for capturing the focused light and a processor for generating image data based on signals provided by the pixel regions. The photo sensor may be any type of photo sensor including a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, a hybrid CCD/CMOS device, etc. Thecamera 110 may also include any conventional features such as a flash, a zoom lens, etc. Thecamera 110 may include a microphone (not shown) for capturing sound or may be coupled to a microphone included in another component of thecomputing device 104 and/or coupled directly to thebus 308. In some implementations, the processor of thecamera 110 may be coupled via thebus 308 to store video and/or still image data in thememory 314 and/or provide the video and/or still image data to other elements of thecomputing device 104, such as thedetection engine 212 and/or activity application(s) 214. - The
storage 310 is an information source for storing and providing access to stored data, such as a database of commands, user profile information, community developed commands, virtual enhancements, etc., object data, calibration data, and/or any other information generated, stored, and/or retrieved by the activity application(s) 214. - In some implementations, the
storage 310 may be included in the memory 314 or another storage device coupled to the bus 308. In some implementations, the storage 310 may be, or may be included in, a distributed data store, such as a cloud-based computing and/or data storage system. In some implementations, the storage 310 may include a database management system (DBMS). For example, the DBMS could be a structured query language (SQL) DBMS. For instance, the storage 310 may store data in an object-based data store or multi-dimensional tables comprised of rows and columns, and may manipulate, i.e., insert, query, update, and/or delete, data entries stored in the verification data store 106 using programmatic operations (e.g., SQL queries and statements or a similar database manipulation library). Additional characteristics, structure, acts, and functionality of the storage 310 are discussed elsewhere herein. -
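As a rough illustration of how such a command database might be organized, the following sketch uses Python's built-in sqlite3 module to store and query predefined commands. The table name, columns, and sample values are hypothetical and are not part of the disclosed system.

```python
import sqlite3

# Hypothetical schema: one row per programming-tile command definition.
conn = sqlite3.connect("commands.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS commands (
           tile_type   TEXT PRIMARY KEY,   -- e.g., 'walk', 'jump', 'repeat'
           action      TEXT NOT NULL,      -- action applied to the target virtual object
           default_qty INTEGER DEFAULT 1   -- used when no quantifier region is detected
       )"""
)
conn.executemany(
    "INSERT OR REPLACE INTO commands VALUES (?, ?, ?)",
    [("walk", "move_forward", 1), ("jump", "jump", 1), ("repeat", "repeat_block", 1)],
)
conn.commit()

def lookup_command(tile_type: str):
    """Return the stored command definition for a detected tile type, if any."""
    row = conn.execute(
        "SELECT action, default_qty FROM commands WHERE tile_type = ?", (tile_type,)
    ).fetchone()
    return row  # None if the tile type has no predefined command
```

In such a sketch, community-developed or user-predefined commands would simply be additional rows inserted into the same table.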
FIG. 4A is a graphical representation 400 illustrating an example physical interface object 120. In some implementations, the example physical interface object 120 may include two different regions, a command region 402 and a quantifier region 404. In some implementations, the command region 402 and the quantifier region 404 may be different regions of the same (e.g., a single) physical interface object 120, while in further implementations, the command region 402 and quantifier region 404 may be separable objects (e.g., tiles (also called blocks)) that can be coupled together to form a coupled command region 402 and quantifier region 404. For example, the quantifier regions 404 may represent various numbers, and different quantifier regions may be coupled with different command regions 402 to form various programming commands. - The
command region 402 may represent various actions, such as walking, jumping, interacting, etc. The command region 402 may correspond to the set of commands that causes the target virtual object 122 to perform the action depicted on the command region 402. The quantifier region 404 may act as a multiplier to the command region 402 and may correspond to a multiplying effect for the number of times the set of commands is executed by the activity application(s) 214, causing the target virtual object 122 to perform the action the number of times represented by the quantifier region 404. For example, the command region 402 may represent the action to move and the quantifier region 404 may include the quantity “2”, causing the activity application(s) 214 to execute a set of commands causing the target virtual object 122 to move two tiles. In some implementations, a command region 402 that does not include a quantifier region 404 may cause the activity application(s) 214 to execute a set of commands a single time (or apply any other default when a quantifier region 404 is not detected). - In some implementations, the physical interface object(s) 120 may include a
directional region 406. The directional region 406 may correspond to a set of commands representing a direction for an action represented in the command region 402. For example, the directional region 406 may be represented as an arrow and the direction of the arrow may represent a corresponding direction for a set of commands. In some implementations, a directional command may be represented by the directional region 406. The directional command may be able to point in any direction, including up, down, left, and/or right. In some implementations, the directional region 406 may be a dial that a user can rotate to point in different directions. The dial may be integrated into the physical interface object(s) 120 or the dial may be separable and may be configured to couple with the physical interface object(s) 120 to allow a user to rotate the dial. In some implementations, the directional region 406 may be rotatable, allowing a user to manipulate the directional region 406 to point in a variety of different directions. In some implementations, the detection engine 212 may be configured to identify the directional region 406 and use the directional region 406 to divide the physical interface object(s) 120 into the quantifier region 404 and the command region 402. - In some implementations, the physical interface object(s) 120 may be magnetic and may be configured to magnetically fasten to adjacent objects. For instance, a given programming tile may include tile magnetic fasteners 408 and/or region magnetic fasteners 410. The tile magnetic fasteners 408 may be present on a top side and/or a bottom side of the physical interface object(s) 120 and allow a physical interface object(s) 120 to magnetically couple with other objects, such as additional physical interface object(s) 120, boundaries of the
physical activity scene 116, etc. In some implementations, the tile magnetic fasteners 408 may magnetically couple with additional tile magnetic fasteners (not shown) on other physical interface object(s) 120. In further implementations, the objects being magnetically coupled with the physical interface object(s) 120 may include a ferromagnetic material that magnetically couples with the tile magnetic fasteners 408. In some implementations, the physical interface object(s) 120 may include two tile magnetic fasteners 408 a/408 c on a top side and/or two tile magnetic fasteners 408 b/408 d on a bottom side, while in further implementations, other quantities of tile magnetic fasteners 408 are contemplated, such as a single tile magnetic fastener 408. - In another example, a given programming tile may include the region magnetic fasteners 410 on the left and/or right side of the programming tile that allow the programming tile to magnetically couple with an adjacent tile as depicted in
FIG. 4A where the command region 402 may be magnetically coupled by the region magnetic fasteners 410 to the quantifier region 404. Non-limiting examples of a magnetic fastener include a magnet, a ferrous material, etc. Detachably fastening the physical interface object(s) 120 is advantageous as it allows a user to conveniently arrange a collection of objects in a logical form, drag a collection of fastened objects around the physical activity scene 116 without the collection falling apart, and quickly manipulate the physical interface object(s) 120 by allowing the collection of fastened objects to be quickly and neatly assembled, etc. - Further, physical interface object(s) 120 may include one or more alignment mechanisms to align the physical interface object(s) 120 with other physical interface object(s) 120 (e.g., vertically, horizontally, etc.). For example, a first physical interface object 120 may include a
protrusion 411 on a bottom side which may be configured to mate with a recess (not shown for a following physical interface object 120, but may be similar to a recess 409 of the first physical interface object 120) of a following physical interface object 120 on a top side, and so on and so forth, although it should be understood that other suitable alignment mechanisms are also possible and contemplated (e.g., flat surfaces that are magnetically alignable, other compatible edge profiles (e.g., wavy surfaces, jagged surfaces, puzzle-piece shaped edges, other compatibly shaped protrusion(s) and/or recesses), other suitable fasteners (e.g., snaps, hooks, hook-and-loop, etc.), etc.). As a further example, additional and/or alternative alignment mechanisms may include curved edges and protruding edges that are configured to nest within each other, etc. - In some implementations, the
detection engine 212 may classify regions using machine learning models and/or one or more visual attributes of the regions (e.g., color, graphics, number, etc.) into commands and quantifiers. This allows the detection engine 212 to determine the actions, directionality, and/or numbers for the detected physical interface object(s) 120. -
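One minimal way to picture this classification step is sketched below. The glyph names, color labels, and the rule-based mapping are hypothetical stand-ins; an actual implementation might instead use a trained model over the same visual attributes.

```python
from dataclasses import dataclass

# Hypothetical visual attributes extracted for a detected region.
@dataclass
class Region:
    dominant_color: str   # e.g., 'orange', 'blue'
    glyph: str            # e.g., 'arrow', 'digit', 'footsteps'

def classify_region(region: Region) -> dict:
    """Assumed rule-based classification of a region into a command or quantifier role."""
    if region.glyph == "digit":
        return {"role": "quantifier"}                     # numeral value read separately
    if region.glyph == "arrow":
        return {"role": "command", "action": "direction"}
    if region.glyph == "footsteps":
        return {"role": "command", "action": "walk"}
    return {"role": "unknown"}

print(classify_region(Region("orange", "footsteps")))  # {'role': 'command', 'action': 'walk'}
```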
FIG. 4B is agraphical representation 412 illustrating example physical interface object(s) 120 represented as various programming tiles 414-430. In some implementations, the programming tiles may include verb tiles that represent various commands and other command tiles, adverb tiles that modify the verb tile, and/or units of measurements or quantities.Programming tile 414 may represent a repeat command. The activity application(s) 214 may associate theprogramming tile 414 with a repeat command that causes a sequence of commands to be repeated. The repeat command may be represented in some implementations by two arrows forming a circular design on theprogramming tile 414. In some implementations, theprogramming tile 414 may be coupled with aquantifier region 404 causing the repeat command to be executed a number of times represented by thequantifier region 404. -
Programming tile 416 may represent a verb tile depicting a walk command that causes the activity application(s) 214 to cause a targetvirtual object 122 to move. The walk command may be represented in some implementations by an image of a character moving on theprograming tile 416. In some implementations, theprogramming tile 416 may be coupled with aquantifier region 404 causing the walk command to be executed a number of times represented by thequantifier region 404. -
Programming tile 418 may represent a verb tile depicting a jump command that causes the activity application(s) 214 to cause a targetvirtual object 122 to jump. The jump command may be represented in some implementations by an image of a character jumping on theprograming tile 418. In some implementations, theprogramming tile 418 may be coupled with aquantifier region 404 causing the jump command to be executed a number of times represented by thequantifier region 404. -
Programming tile 420 may represent a verb tile depicting a tool command that causes the activity application(s) 214 to cause a targetvirtual object 122 to interact with something in thevirtual scene 118 and/or perform an action. The tool command may be represented in some implementations by an image of a hand on theprograming tile 420. In some implementations, theprogramming tile 420 may be coupled with aquantifier region 404 causing the tool command to be executed a number of times represented by thequantifier region 404. -
Programming tile 422 may represent a verb tile depicting a magic command that causes the activity application(s) 214 to cause a targetvirtual object 122 to perform a predefined command associated with the magic command. The magic command may be one example of an event command, while additional events may be included other than the magic command, such as a celebration event, a planting event, an attack event, a flashlight event, a tornado event, etc. The magic command may be represented in some implementations by an image of stars on theprograming tile 422. In some implementations, theprogramming tile 422 may be coupled with aquantifier region 404 causing the magic command to be executed a number of times represented by thequantifier region 404. -
Programming tile 424 may represent a verb tile depicting a direction command that causes the activity application(s) 214 to perform a command in a specific direction in thevirtual scene 118. The direction command may be represented in some implementations by an image of an arrow on theprograming tile 424. In some implementations, theprogramming tile 424 may be coupled with acommand region 402 causing the command to be executed in a specific direction. -
Programming tile 426 may represent a tile depicting an if command that causes the detection engine 212 to detect a specific situation and, when the situation is present, to perform a separate set of commands as indicated by the if command. The if command may be represented in some implementations by an exclamation point on the programming tile 426. In some implementations, the programming tile 426 may allow if/then instances to be programmed into a sequence of physical interface object(s) 120. In some implementations, the detection engine 212 may be configured to detect clusters of tiles separated by an if command, as described in more detail with reference to FIG. 5. -
Programming tiles 430 may represent examples ofquantifier regions 404 depicting various numerical values. Thequantifier regions 404 may be coupled with other programming tiles to alter the amount of times a command may be executed. -
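To make the command/quantifier pairing concrete, here is a small illustrative sketch of how detected tiles might expand into repeated commands, with a default count of one when no quantifier region is present. The tile names and data shapes are assumptions made for illustration only.

```python
from typing import List, Optional, Tuple

# A detected tile: a command name plus an optional quantifier value.
Tile = Tuple[str, Optional[int]]

def expand_tiles(tiles: List[Tile]) -> List[str]:
    """Expand (command, quantifier) pairs into a flat list of commands."""
    program: List[str] = []
    for command, quantifier in tiles:
        count = quantifier if quantifier is not None else 1  # default when no quantifier detected
        program.extend([command] * count)
    return program

# A walk tile coupled with quantifier "2" yields two move commands.
print(expand_tiles([("walk", 2), ("jump", None)]))  # ['walk', 'walk', 'jump']
```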
Programming tile 428 may represent an execution block that causes the activity application(s) 214 to execute the current sequence of physical interface object(s) 120. In some implementations, the execution block may have one or more states. The detection engine 212 may be configured to determine the state of the execution block and cause the activity application(s) 214 to execute the set of commands in response to detecting a change in the state. For example, one state may be a pressed-state and another state may be an unpressed-state. In the unpressed-state, the detection engine 212 may detect a visual indicator 432 that may optionally be included on the execution block. When a user interacts with the execution block, the visual indicator 432 may change, causing the detection engine 212 to detect the pressed-state. For example, when a user pushes a button on the execution block, it may cause the visual indicator 432 (shown as slots) to change colors, disappear, etc., prompting the activity application(s) 214 to execute the set of commands. - The execution block can additionally or alternatively have a semi-pressed state, in which a user may be interacting with the execution block but has not yet fully transitioned between a pressed-state and an unpressed-state. The execution block may further include a rubbish state, in which the
detection engine 212 may be unable to determine a state of the execution block, and various parameters may be programmed for this state, such as waiting until a specific state change has been detected, inferring a reasonable state based on the arrangement of other physical interface object(s) 120, etc. -
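The state handling described for the execution block could be approximated as a small state machine like the one below. The state names, the wait-on-rubbish fallback, and the trigger rule are illustrative assumptions rather than the patented implementation.

```python
PRESSED, UNPRESSED, SEMI_PRESSED, RUBBISH = "pressed", "unpressed", "semi-pressed", "rubbish"

def should_execute(previous_state: str, current_state: str) -> bool:
    """Trigger execution only on a transition into the pressed-state."""
    return previous_state != PRESSED and current_state == PRESSED

def resolve_state(detected: str, last_known: str) -> str:
    """When the detector reports a rubbish (indeterminate) state, keep the
    last known state until a clear state change is observed."""
    return last_known if detected == RUBBISH else detected

state = UNPRESSED
for observed in [UNPRESSED, SEMI_PRESSED, RUBBISH, PRESSED]:
    new_state = resolve_state(observed, state)
    if should_execute(state, new_state):
        print("execute compiled command sequence")
    state = new_state
```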
FIG. 4C is a side view of aset 434 of physical interface object(s) 120 a-120 c andFIG. 4D is a side view of astack 436 of the set of physical interface object(s) 120 a-120 c. As shown, a user may stack and/or unstack the physical interface object(s) 120 a-120 c for convenient storage and manipulation by nesting the physical interface object(s) together via compatible coupling portions. - In some implementations, each physical interface object(s) 120 may include compatible receiving portions 440 and engaging portions 438. The engaging portion 438 of a physical interface object 120 may be configured to engage with the receiving portion of an adjacently situated physical interface object 120 as shown in
FIG. 4D, allowing the physical interface object(s) 120 to stack in a flush manner, with no protrusions or gaps between the stacked physical interface object(s) 120. In some cases, a parameter adjustment mechanism, such as one including a directional region 406, may form the engaging portion 438 and may be configured to engage with a correspondingly sized receiving portion 440, such as a recess, as shown in the illustrated embodiment, although it should be understood that the engaging portion 438 and receiving portions 440 may be discrete members of the physical interface object(s) 120. More particularly, the engaging portion 438 may include the parameter adjustment mechanism forming a protrusion protruding outwardly from a front surface of the physical interface object(s) 120. Each physical interface object(s) 120 a-120 c depicted in the representation 434 includes a corresponding receiving portion 440 that may include a recess formed in a bottom surface of the physical interface object(s) 120. The recess may be configured to receive the protrusion, allowing the protrusion to nest within the recess of the physical interface object(s) 120 when the physical interface object(s) 120 are stacked as shown in FIG. 4D. In some implementations, the physical interface object(s) 120 a-120 c may magnetically couple when stacked. The magnetic coupling may occur based on top magnetic fasteners 442 and bottom magnetic fasteners 444 of adjacent physical interface object(s) 120 coupling together when the physical interface object(s) 120 a-120 c are in a stacked position as shown in FIG. 4D. -
FIG. 5 is a graphical representation 500 illustrating an example sequence of physical interface object(s) 120. In some implementations, the detection engine 212 may detect a sequence 502 of physical interface object(s) 120. In some implementations, the detection engine 212 may detect a sequence as including at least one command tile and an execution block. In the illustrated example, the sequence 502 includes multiple command tiles, quantifier tiles, and an execution block coupled together and representing a specific set of commands. In further implementations, other sequences of physical interface object(s) 120 are contemplated. In further implementations, the detection engine 212 may be configured to identify separate clusters that are portions of the sequence; for example, cluster 504 may represent a separate set of commands to perform in response to an if tile indicating an interrupt and may cause the activity application(s) 214 to halt execution of a previous section of the sequence in order to execute the new cluster 504 when conditions for the interrupt are satisfied. The detection engine 212 may be configured to determine various clusters and subroutines related to those clusters. - In some implementations, the
detection engine 212 may determine statistically likely locations for certain physical interface object(s) 120 based on the clustering. For example, two or more clusters may be represented by two branches of a sequence in the physical activity scene, and based on the clusters, the detection engine 212 may determine two possible positions for an end object (e.g., play button). The activity application(s) 214 may be configured to inject a candidate into the set of commands based on the possible positions of the object. For example, the detection engine 212 may identify likely candidates for a missing physical interface object(s) 120 and the activity application(s) 214 may inject the likely candidate into the set of commands at the candidate location (e.g., the portion of the set of commands determined to be missing). In further implementations, if the detection engine 212 detects that the sequence of physical interface object(s) 120 exceeds a boundary of the physical activity scene 116, then the detection engine 212 may use statistical probabilities of likely locations for an execution block and execute the commands associated with the detected physical interface object(s) 120. - In some implementations, the
detection engine 212 may determine if there are missing object candidates, determine approximate candidates, and populate the positions of the missing object candidates with the approximations. For example, in some cases, an end object (e.g., play button) at the end of a string of objects may go undetected, and thedetection engine 212 may automatically determine the absence of that object from likely positions, and add it as a candidate to those positions. -
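A toy version of this candidate-injection idea is sketched below. The grid coordinates and helper name are hypothetical and only illustrate inferring a likely position for an undetected end object (e.g., a play button) at the tail of a detected string of tiles.

```python
from typing import List, Optional, Tuple

Position = Tuple[int, int]  # (column, row) on the physical activity scene grid

def infer_missing_end_object(detected: List[Position]) -> Optional[Position]:
    """Guess the most likely position of an undetected end object by extending
    the detected string of tiles one step in its current direction."""
    if len(detected) < 2:
        return None
    (x1, y1), (x2, y2) = detected[-2], detected[-1]
    step = (x2 - x1, y2 - y1)               # direction of the sequence
    return (x2 + step[0], y2 + step[1])     # candidate slot just past the last tile

tiles = [(0, 0), (1, 0), (2, 0)]            # a row of detected tiles
print(infer_missing_end_object(tiles))      # (3, 0): candidate play-button position
```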
FIG. 6 is a flowchart of an example method 600 for virtualized tangible programming. At 602, the detection engine 212 may detect an object included in image data received from the video capture device 110. In some implementations, the detection engine detects the objects by analyzing specific frames of a video file from the image data and performs object and line recognition to categorize the detected physical interface object(s) 120. At 604, the detection engine 212 may perform a comparison between each of the detected physical interface object(s) 120 and a predefined set of object definitions. For example, the detection engine 212 may compare identified graphical attributes to identify various portions of programming tiles, such as those described in FIG. 4B. In further implementations, the detection engine 212 may identify a color of a physical interface object(s) 120 and/or other detectable physical attribute(s) of the physical interface object(s) 120 (e.g., texture, profile, etc.), and identify the object based on the physical attribute(s) (e.g., color). - At 606, the
detection engine 212 may recognize one or more of the physical interface object(s) 120 as a visually quantified object and/or a visually unquantified object based on the comparisons. A visually quantified object may include a physical interface object(s) 120 that quantifies a parameter, such as a direction, a numerical value, etc. Visually quantified objects may include command regions 402 coupled with quantifier regions 404. In some implementations, visually quantified objects may also include command regions 402 that are generally coupled with quantifier regions 404, but are set to a default numerical value (such as “1”) when no quantifier region 404 is coupled to the command region 402. Visually unquantified objects may, in some cases, not explicitly quantify parameters, or may quantify parameters in a manner that is different from the visually quantified objects. Visually unquantified objects may include physical interface object(s) 120 that the detection engine 212 does not expect to be coupled with a quantifier region 404, such as an execution block 428, magic tile 422, and/or if tile 426, as examples. - At 608, the
detection engine 212 may process thecommand region 402 and/or thequantifier region 404 for each visually quantified object and identify corresponding commands. The corresponding commands may include commands related tospecific command regions 402 and multipliers of the command related to quantities detected in thequantifier region 404. Thedetection engine 212 may use a specific set of rules to classify thecommand regions 402 and/or thequantifier regions 404 as described elsewhere herein. - At 610, in some implementations, the
detection engine 212 may further identify corresponding commands for each visually unquantified object, such as if/then commands for repeat tiles, magic commands for magic tiles, and/or detecting states for the execution block. - At 612, the detection engine may be configured to provide the detected commands to the activity application(s) 214 and the activity application(s) 214 may compile the commands into a set of commands that may be executed on the
computing device 104. The set of commands may include the specific sequence of the commands, and the activity application(s) 214 may execute the sequence of commands in a linear fashion based on the order in which the physical interface object(s) 120 were arranged in the physical activity scene 116. In some implementations, the activity application(s) 214 may be configured to detect any errors when compiling the set of commands and provide alerts to the user when the set of commands would not produce a desired result. For example, if an executed set of commands would move a target virtual object 122 into an area determined to be out of bounds, then the activity application(s) 214 may cause the virtual scene to present an indication that the set of commands is improper. In further implementations, the activity application(s) 214 may provide prompts and suggestions in response to the set of commands being improper. The prompts and/or suggestions may be based on other users' history on a specific level, machine learning of appropriate responses, etc. -
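The compile-and-validate step at 612 can be pictured with the following sketch, which walks an expanded command list against hypothetical scene bounds and flags a program that would move the target virtual object out of bounds. The command names, bounds, and alert format are illustrative assumptions.

```python
from typing import List, Tuple

MOVES = {"right": (1, 0), "left": (-1, 0), "up": (0, -1), "down": (0, 1)}  # assumed command set

def validate_program(start: Tuple[int, int], commands: List[str],
                     width: int, height: int) -> List[str]:
    """Simulate the compiled command sequence and collect out-of-bounds alerts."""
    x, y = start
    alerts = []
    for i, command in enumerate(commands):
        dx, dy = MOVES.get(command, (0, 0))
        x, y = x + dx, y + dy
        if not (0 <= x < width and 0 <= y < height):
            alerts.append(f"command {i + 1} ('{command}') moves the character out of bounds")
    return alerts

print(validate_program((0, 0), ["right", "right", "up"], width=5, height=5))
# ["command 3 ('up') moves the character out of bounds"]
```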
FIG. 7 is a flowchart of an example method for virtualized tangible programming. At 702, the activity application(s) 214 may cause thedisplay 112 to present a virtual environment. In further implementations, the activity application(s) 214 may cause the display to present a target virtual object within the virtual environment, the virtual environment may be an environment displayed in at least a portion of the virtual scene. For example, the virtual environment may include a forest setting displayed on a graphical user interface, and the targetvirtual object 122 may be a virtual character in the forest setting. - At 704, the activity application(s) 214 may determine an initial state of the target
virtual object 122 in the virtual environment of the user interface. The initial state may be related to a specific location within the virtual environment, or it may be an initial objective, a level, etc. For example, the target virtual object 122 may be present in the center of the display 112 and the goal of the target virtual object 122 may be to interact with an additional virtual object 124 also displayed in the virtual environment. - At 706, the
video capture device 110 may capture an image of thephysical activity surface 116. The physical activity surface may include an arrangement of physical interface object(s) 120. In some implementations, thevideo capture device 110 may capture multiple images of thephysical activity surface 116 over a period of time to capture changes in the arrangement of the physical interface object(s) 120. - At 708, the
detection engine 212 may receive the image from thevideo capture device 110 and process the image to detect the physical interface object(s) 120 in specific orientations. For example, thedetection engine 212 may identify physical interface object(s) 120 that a user has arranged into a sequence. In further implementations, thedetection engine 212 may be configured to ignore objects present in thephysical activity scene 116 that are not oriented into a specific orientation. For example, if a user creates a sequence of physical interface object(s) 120 and pushes additional physical interface object(s) 120 to the side that were not used to create the sequence, then thedetection engine 212 may ignore the additional physical interface object(s) 120 even though they are detectable and recognized within thephysical activity scene 116. - At 710, the
detection engine 212 may compare the physical interface object(s) 120 in the specific orientation to a predefined set of instructions. The predefined set of instructions may include commands related to the virtual scene represented by each of the physical interface object(s) present within the sequence. In some implementations, the predefined set of instructions may only relate to specific physical interface object(s) 120 present within the sequence, while other physical interface object(s) 120 do not include instruction sets. In further implementations, the instructions sets may include determining which physical interface object(s) 120 are visually quantified objects and which are visually unquantified objects. In some implementations, the predefined set of instructions may be built. Building the instruction set includes generating one or more clusters of physical interface object(s) 120 based on relative positions and/or relative orientations of the objects and determining a sequence for the commands of the instructions based on the clusters. - At 712, the activity application(s) 214 may determine a command represented by the physical interface object(s) 120 in a specific orientation based on the comparison. In some implementations, determining a command may include identifying command regions and quantifier regions of specific physical interface object(s) 120, while in further implementations, alternative ways of determining commands may be used based on how the set of commands are defined.
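One simplified way to group detected tiles into clusters by relative position, before ordering them into a command sequence as discussed above, is shown below. The adjacency rule, coordinate grid, and data shapes are assumptions made for illustration.

```python
from typing import List, Tuple

Position = Tuple[int, int]

def build_clusters(positions: List[Position], max_gap: int = 1) -> List[List[Position]]:
    """Group tile positions into clusters of adjacent tiles (simple flood fill)."""
    remaining = set(positions)
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            cx, cy = frontier.pop()
            neighbors = [p for p in remaining
                         if abs(p[0] - cx) + abs(p[1] - cy) <= max_gap]
            for p in neighbors:
                remaining.remove(p)
                cluster.append(p)
                frontier.append(p)
        clusters.append(sorted(cluster))   # left-to-right order within the cluster
    return clusters

tiles = [(0, 0), (1, 0), (2, 0), (5, 3), (6, 3)]   # a main sequence and a separate if-cluster
print(build_clusters(tiles))                        # two clusters of adjacent tiles
```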
- At 714, the activity application(s) 214 may determine a path through the virtual environment for the target
virtual object 122 based on the command. The determined path may be based on a set of rules and may include a prediction of what will happen when the command is executed in the virtual environment. In further implementations, the determined path may be the effect of a sequence of physical interface object(s) 120 prior to formal execution. For example, if the commands cause the target virtual object to move two blocks right and down one block to access a strawberry (additional virtual object 124) then the activity application(s) 214 may determine a path based on the commands causing the targetvirtual object 122 to perform these actions. - At 716, the activity application(s) 214 may cause the
display 112 to present a path projection within the virtual scene 118 in the user interface for presentation to the user. The path projection may be a visual indication of the effects of the command, such as highlighting a block the command would cause the target virtual object 122 to move to. In another example, the activity application(s) 214 may cause an additional virtual object 124 to change colors to signal to the user that the command would cause the target virtual object 122 to interact with the additional virtual object 124. -
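The path determination and projection at 714-716 might be sketched as below, where the commands are simulated ahead of execution to produce the list of grid cells to highlight. The coordinate system and command names are hypothetical.

```python
from typing import List, Tuple

STEP = {"right": (1, 0), "left": (-1, 0), "up": (0, -1), "down": (0, 1)}  # assumed directions

def project_path(start: Tuple[int, int], commands: List[str]) -> List[Tuple[int, int]]:
    """Predict the cells the target virtual object would visit, for highlighting."""
    path = []
    x, y = start
    for command in commands:
        dx, dy = STEP.get(command, (0, 0))
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# Two moves right and one move down produce three highlighted cells.
print(project_path((2, 2), ["right", "right", "down"]))  # [(3, 2), (4, 2), (4, 3)]
```

As tiles are added or removed in the physical activity scene, recomputing this projection yields the progressive highlighting described with reference to FIGS. 8A-8D.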
FIGS. 8A-8D are a graphical representation 800 illustrating an example interface for virtualized tangible programming that includes progressive path highlighting. In FIG. 8A, the virtual scene 118 includes a target virtual object 122 and includes a path projection 802 a showing a command that would cause the target virtual object 122 to perform an action on the current tile in the path projection 802 a. In FIG. 8B, the path projection 802 b has been extended a block, representing a command that would cause the target virtual object 122 to move to the tile shown in the path projection 802 b. In FIG. 8C, the path projection 802 c has been extended an additional block, representing a command to move two tiles. In some implementations, the path projection 802 may update as additional physical interface object(s) 120 are added to a sequence and the commands are displayed in the path projection 802. In this specific example, the path projection 802 c may have been displayed based on either an additional command tile being added to the sequence or a command tile receiving a quantifier region to multiply the number of times the move command is performed. In FIG. 8D, the path projection 802 d has been extended a tile to the right, showing the addition of another command in a different direction. The path projection 802 shown in this example is merely illustrative, and various path projections based on a sequence of physical interface object(s) 120 are contemplated. -
FIG. 8E is agraphical representation 804 of an example interface for virtualized tangible programming. In the example interface, acommand detection window 806 is displayed showing the physical interface object(s) 120 detected in a sequence by thedetection engine 212. The activity application(s) 214 may display the identified sequence as a way of providing feedback to a user as commands are identified. By displaying the detected sequence in acommand detection window 806, the activity application(s) 214 may signal to a user when a sequence is detected, if there are detection errors, or additional commands for the user to review. -
FIG. 9A is a perspective view of anexample programming tile 900. Theprogramming tile 900 includes afirst portion 940 and asecond portion 941. Thefirst portion 940 includes acommand region 902. For example, as depicted, thecommand region 902 may include a visual indicator representing a repeat command (e.g., recursive arrows), which may be processed by thesystem 100, as discussed elsewhere herein. Thesecond portion 941 includes aquantifier region 904. For example, as depicted, thequantifier region 904 includes a visual indicator representing a numeral (e.g., the number 1), which may be processed by thesystem 100, as discussed elsewhere herein. - The
first portion 940 may comprise a body having a plurality of surfaces. For instance, as depicted, the first portion 940 may include a front surface 942, a back surface 960, a first side surface 944, a second side surface 945, a third side surface 946, and a tile coupling portion 952 having one or more sides. One or more of the surfaces of the first portion 940 may include components of one or more tile alignment mechanisms. As discussed elsewhere herein, the tile alignment mechanism conveniently allows for the alignment of two adjacently situated tiles. In some cases, as two tiles are situated sufficiently close to one another such that the corresponding alignment components comprising the alignment mechanism can engage, the alignment mechanism aligns the two tiles so they engage properly. As a further example, the coupling of the two tiles may be assisted by compatible magnetic components included in the tiles that are configured to magnetically couple as the tiles are adjacently situated such that the alignment components may engage. The alignment mechanism can advantageously automatically align the tiles as the tiles become magnetically coupled. - As shown, the
front surface 942 may extend from the first side surface 944 to an edge of the tile coupling portion 952, as well as from the third side surface 946 to the second side surface 945. The front surface 942 may bear and/or incorporate the command region 902. The front surface 942 may be connected to the back surface 960 by the first side surface 944, the second side surface 945, the third side surface 946, and/or the one or more sides of the tile coupling portion 952. In the depicted embodiment, the first side surface 944, the second side surface 945, and the third side surface 946 are depicted as being perpendicular to the front surface 942 and the back surface 960, although it should be understood that other orientations are possible and contemplated. The side surfaces of the first portion 940 may be contiguous, and collectively form the outer sides of the body. - The
second portion 941 may comprise a body having a plurality of surfaces. For instance, as depicted, thesecond portion 941 may include afront surface 943, aback surface 961, afirst side surface 948, asecond side surface 947, athird side surface 949, and thetile coupling portion 954 having one or more sides. -
FIGS. 9B and 9C are perspective views of the programming tile 900 showing the first portion 940 and the second portion 941 of the programming tile 900 separated. In its separated form, the surfaces of the tile coupling portions are exposed: the tile coupling portion 952 may include side surfaces 956 and 957, and the tile coupling portion 954 may include side surfaces 955 and 953. In some embodiments, the tile coupling portion 952 and the tile coupling portion 954 may be shaped in a way that they can compatibly engage and become aligned. Any suitable mechanism for aligning the portions of the programming tile 900 is contemplated. - For instance, as depicted in
FIGS. 9B and 9C, the tile coupling portion 952 may comprise a protruding portion and the tile coupling portion 954 may comprise a recessed portion. The protruding portion may include the surface 957, which radially extends outwardly from a center point aligned with the side surfaces 956. The recessed portion may include surface 953 that correspondingly radially recesses into the body of the second portion 941, thus extending inwardly from a center point aligned with the side surfaces 955. The side surfaces 956 and 955, and the curved surfaces 957 and 953, may compatibly engage when the portions are coupled, as shown in FIGS. 9A, 9D, and 9E. - In some implementations, the
second portion 941 may include one or more magnetic fasteners that are magnetically coupleable to one or more magnetic fasteners included in the first portion 940. As with the alignment mechanisms discussed herein with respect to the tangible physical object(s) 120, this advantageously allows the second portion 941 to be retained with the first portion 940 and resist inadvertent separation between the portions 940 and 941. - In further embodiments, the
second portion 941 and the first portion 940 may be detachably coupled using additional and/or alternative fasteners, such as engagement and receiving components having predetermined shapes that are configured to snap together, clip together, hook together, or otherwise couple to one another in a removable fashion. - The detachable/re-attachable nature of the first and
second portions allows a user to, for example, swap out the second portion 941 to change the counter of a loop command, as shown in FIGS. 9A, 9B, 9C, and 9D, which show quantifier regions 904 having different values (e.g., one, two, three, four, etc.). - In some implementations, one or more sides of the
programming tile 900 may include one or more components of the stacking mechanism, as described elsewhere herein. For example, a bottom side of the programming tile 900 may include a bottom surface collectively comprised of bottom surface 960 and bottom surface 961 of the first and second portions 940 and 941, and may include a component 970 of the stacking mechanism that is configured to engage with one or more other compatible components, such that two or more tangible physical objects 120 can be stacked. For example, as shown, a recess 970 may be formed in the bottom surface. The recess may include an inner cavity sidewall 971 and a cavity end/bottom surface 972. The recess may be shaped to receive a compatibly shaped protrusion of another tangible physical object 120, as discussed elsewhere herein. While in this particular example the stacking mechanism component is shown as a recess, it should be understood that other suitable options, such as those described with reference to the alignment mechanism, are applicable and encompassed hereby. -
FIGS. 10A and 10B are perspective views of an example programming tile 1000. As shown, the programming tile 1000 may include a body having a front surface 1001, a back surface 1010, a tile engaging side surface 1009, an alignment side surface 1006, an end surface 1004, and a side surface 1002. The front surface 1001 and back surface 1010 are connected via the side surfaces 1009, 1006, 1004, and 1002 in a similar manner to that described with reference to the programming tile 900, which will not be repeated here for the purpose of brevity. The programming tile 1000 may include a command region 902. In the depicted example, the command region 902 includes a visual indicator reflecting an if command, as discussed elsewhere herein, although other visual indicators are also possible and contemplated. - The
programming tile 1000, as depicted, includes atile coupling portion 1008. Thetile coupling portion 1008 is configured to couple with one or more sides of another tangible physical object 120. In some implementations, coupling theprogramming tile 1000 to another tile allows the user to augment, enhance, add to, etc., an action of the other tile (e.g., based on thecommand regions 902 of the respective tiles), as discussed elsewhere herein. - In some implementations, the
tile coupling portion 1008 may comprise a recessedsurface 1009 that is configured to mate with a corresponding outer surface of an adjacent programming tile, such assurface 948 of thesecond portion 941 of theprogramming tile 900, thesurface 1148 of the programming tile 1100 (e.g., seeFIG. 11 ), and/or any other suitable tile surfaces of any other applicable tiles, etc. -
FIGS. 11A and 11B are perspective views of an example programming tile 1100. As depicted, the programming tile 1100 is comprised of a single tile, which includes a front surface 1142 having a command region 902, side surfaces 944, 1146, 1148, and 1145, and a bottom surface 1160. The front surface 1142 is connected to the bottom surface 1160 via the side surfaces 944, 1146, 1148, and 1145, in a manner similar to that discussed with reference to programming tile 900, which will not be repeated here for the purpose of brevity. As shown, the surface 1146 includes an alignment component 984, and the surface 1145 includes the alignment component 982, as discussed elsewhere herein. - In
FIG. 11B, a bottom side of the programming tile 1100 includes a component 970 of the stacking mechanism discussed above. In contrast to FIG. 9D, where the component 970 may extend across two or more discrete portions of the programming tile, in this non-limiting example the stacking component 970 is included in a single portion of the programming tile, to illustrate the flexibility of the design. However, it should be understood that the stacking portion may be included in one or more regions of the programming tile. For example, the component 970 may comprise two or more receiving or engaging components (e.g., recesses or protrusions, other fastening components, etc.) configured to interact with two or more corresponding receiving or engaging components of the opposing surface of an adjacently situated programming tile, such as one on which the programming tile 1100 is being stacked. -
FIGS. 12A and 12B are perspective views of anexample programming tile 1200. Theprogramming tile 1200 may include afront surface 1241 including acommand region 902, side surfaces 944, 1245, 1248, and 1246, andbottom surface 1260. Thefront surface 1241 may include one or more visual indicators 432 (e.g., 432 a, 432 b, etc.), that are mechanically linked to a user-interactabletangible button 1224, such that when a user presses thebutton 1224, the one or more visual indicators are triggered. Upon being triggered, the visual indicators 432 may change their physical appearance, the change of which may be detected by thesystem 100, as discussed elsewhere herein. - In some implementations, the
button 1224 may be formed on a plate (not shown) within the body of the programming tile 1200, which may comprise a housing of a mechanical assembly that transmits the vertical movement of the button to the components comprising the visual indicators 432. For example, as shown, a visual indicator 432 may comprise an aperture 1222 (e.g., 1222 a, 1222 b, etc.) formed in the front surface 1241 of the programming tile 1200, and a block 1220 (e.g., 1220 a, 1220 b, etc.) that is situated within the aperture 1222, thus filling the aperture 1222. As the button 1224 is pressed (e.g., by a user pressing the top surface 1228 of the button 1224, which is coupled to the mechanical assembly via side(s) 1230 of the button) and recedes into the corresponding aperture 1226, formed in the front surface 1241 and through which the button extends, the mechanical assembly transmits the movement to the block 1220 and correspondingly recedes the block away from the front surface such that the aperture appears empty. - The state of the aperture (e.g., filled, empty) may be detected by the
system 100. Additionally or alternatively, the state of the button 1224 (e.g., pressed, semi-pressed, fully pressed), may similarly be detected by thesystem 100. Detection of such state changes may trigger execution of the program which is embodied by a collection of programming tiles including, in this case, theprogramming tile 1200. -
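A crude sketch of detecting the filled/empty state of such an aperture from image data is shown below. The brightness threshold and region-of-interest handling are assumptions and stand in for whatever detection the system 100 actually performs.

```python
import numpy as np

def aperture_state(image: np.ndarray, roi: tuple, empty_threshold: float = 60.0) -> str:
    """Classify an aperture region of a grayscale frame as 'filled' or 'empty'.

    image: 2D grayscale frame; roi: (x, y, width, height) of the aperture.
    An empty aperture is assumed to read darker than the block that fills it.
    """
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w]
    return "empty" if patch.mean() < empty_threshold else "filled"

frame = np.full((100, 100), 200, dtype=np.uint8)   # bright synthetic frame
frame[40:50, 40:50] = 10                           # darkened slot where the block receded
print(aperture_state(frame, (40, 40, 10, 10)))     # 'empty', which may trigger execution
```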
FIGS. 13A-13H are various views of an example programming tile 1300. The programming tile 1300 includes a first portion 1340 and the second portion 941. The second portion 941 is discussed with reference to FIGS. 9A-9E, so a detailed description of the portion 941 will not be repeated here for the purposes of brevity. The first portion 1340 includes a front surface 1342, the side surface 944, the side surface 945, a side surface 1346, an incrementing portion 1310 having one or more side surfaces, and a bottom surface 1360. The front surface 1342 is connected to the bottom surface 1360 via the side surfaces 944, 945, 1346, and/or surface(s) of the incrementing portion 1310. The side surface 1346 includes the alignment component 984, and the side surface 945 includes the alignment component 982, as discussed elsewhere herein. The front surface 1342 includes a command region 902. - Similar to
FIG. 9D , inFIG. 13D , theback surface 1360 of theprogramming tile 1300 may include a stacking component, such ascomponent 970.FIGS. 13C and 13D are profile views of theprogramming tile 1300, showing the incrementingportion 1310. As shown, in the depicted implementation, the incrementingportion 1310 protrudes outwardly from thefront surface 1342. For example, the incrementingportion 1310 includes one or more sides extending perpendicularly outwardly from thefront surface 1342 to atop surface 1314 of the incrementingportion 1310. In some implementations, the incrementingportion 1310 comprises a dial that is turnable by the user to adjust a parameter (e.g., the parameter region 906 (e.g., directional region 406)) associated with the command of thecommand region 902, as discussed elsewhere herein. For example, as shown acrossFIGS. 13A, 13B, 13F, 13G, and 13H , the dial may be turned by the user to position the visual indicator (e.g., an arrow in this case) included on thetop surface 1314 differently. - In
FIG. 13E , unexploded be as shown where thesecond portion 941 is separated away from thefirst portion 1340, exposing the side view of the incrementingportion 1310. As shown, the incrementingportion 1310 may include abase portion 1316 which includes a recess in which theturnable portion 1311 of the incrementingportion 1310 is inserted and in which theturnable portion 1311 rotates. Described another way, thebase portion 1316 may include a bowl like cavity into which theturnable portion 1311 is inserted. The cavity, and some implementations, may emulate a chase, and theturnable portion 1311 may rotate the ball bearings included along the perimeter of the chase, as one in the art would understand. In some implementations, theturnable portion 1311 may include snapping fasteners configured to snap to corresponding snapping fasteners included in thebase portion 1316 to retain theturnable portion 1311 in place. An outwardly facing portion of theturnable portion 1311 and thebase portion 1316 may comprise thetile coupling portion 952, such that thefirst portion 1340 may couple withother programming tile 1300 portions, such as thesecond portion 941, as discussed elsewhere herein. - This technology yields numerous advantages including, but not limited to, providing a low-cost alternative for developing a nearly limitless range of applications that blend both physical and digital mediums by reusing existing hardware (e.g., camera) and leveraging novel lightweight detection and recognition algorithms, having low implementation costs, being compatible with existing computing device hardware, operating in real-time to provide for a rich, real-time virtual experience, processing numerous (e.g., >15, >25, >35, etc.) physical interface object(s) 120 simultaneously without overwhelming the computing device, recognizing physical interface object(s) 120 with substantially perfect recall and precision (e.g., 99% and 99.5%, respectively), being capable of adapting to lighting changes and wear and imperfections in physical interface object(s) 120, providing a collaborative tangible experience between users in disparate locations, being intuitive to setup and use even for young users (e.g., 3+ years old), being natural and intuitive to use, and requiring few or no constraints on the types of physical interface object(s) 120 that can be processed.
- It should be understood that the above-described example activities are provided by way of illustration and not limitation and that numerous additional use cases are contemplated and encompassed by the present disclosure. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it should be understood that the technology described herein may be practiced without these specific details. Further, various systems, devices, and structures are shown in block diagram form in order to avoid obscuring the description. For instance, various implementations are described as having particular hardware, software, and user interfaces. However, the present disclosure applies to any type of computing device that can receive data and commands, and to any peripheral devices providing services.
- In some instances, various implementations may be presented herein in terms of algorithms and symbolic representations of operations on data bits within a computer memory. An algorithm is here, and generally, conceived to be a self-consistent set of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
- It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout this disclosure, discussions utilizing terms including “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
- Various implementations described herein may relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, including, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memories including USB keys with non-volatile memory, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
- The technology described herein can take the form of a hardware implementation, a software implementation, or implementations containing both hardware and software elements. For instance, the technology may be implemented in software, which includes but is not limited to firmware, resident software, microcode, etc. Furthermore, the technology can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any non-transitory storage apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems, storage devices, remote printers, etc., through intervening private and/or public networks. Wireless (e.g., Wi-Fi′) transceivers, Ethernet adapters, and modems, are just a few examples of network adapters. The private and public networks may have any number of configurations and/or topologies. Data may be transmitted between these devices via the networks using a variety of different communication protocols including, for example, various Internet layer, transport layer, or application layer protocols. For example, data may be transmitted via the networks using transmission control protocol/Internet protocol (TCP/IP), user datagram protocol (UDP), transmission control protocol (TCP), hypertext transfer protocol (HTTP), secure hypertext transfer protocol (HTTPS), dynamic adaptive streaming over HTTP (DASH), real-time streaming protocol (RTSP), real-time transport protocol (RTP) and the real-time transport control protocol (RTCP), voice over Internet protocol (VOIP), file transfer protocol (FTP), WebSocket (WS), wireless access protocol (WAP), various messaging protocols (SMS, MMS, XMS, IMAP, SMTP, POP, WebDAV, etc.), or other known protocols.
- Finally, the structure, algorithms, and/or interfaces presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method blocks. The required structure for a variety of these systems will appear from the description above. In addition, the specification is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the specification as described herein.
- The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the specification to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the disclosure be limited not by this detailed description, but rather by the claims of this application. As will be understood by those familiar with the art, the specification may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, routines, features, attributes, methodologies and other aspects are not mandatory or significant, and the mechanisms that implement the specification or its features may have different names, divisions and/or formats.
- Furthermore, the modules, routines, features, attributes, methodologies and other aspects of the disclosure can be implemented as software, hardware, firmware, or any combination of the foregoing. Also, wherever an element, an example of which is a module, of the specification is implemented as software, the element can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future. Additionally, the disclosure is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure is intended to be illustrative, but not limiting, of the scope of the subject matter set forth in the following claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/138,651 US20210118313A1 (en) | 2016-05-24 | 2020-12-30 | Virtualized Tangible Programming |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662341041P | 2016-05-24 | 2016-05-24 | |
US15/604,620 US10885801B2 (en) | 2016-05-24 | 2017-05-24 | Virtualized tangible programming |
US17/138,651 US20210118313A1 (en) | 2016-05-24 | 2020-12-30 | Virtualized Tangible Programming |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/604,620 Division US10885801B2 (en) | 2016-05-24 | 2017-05-24 | Virtualized tangible programming |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210118313A1 true US20210118313A1 (en) | 2021-04-22 |
Family
ID=60418765
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/604,620 Expired - Fee Related US10885801B2 (en) | 2016-05-24 | 2017-05-24 | Virtualized tangible programming |
US17/138,651 Abandoned US20210118313A1 (en) | 2016-05-24 | 2020-12-30 | Virtualized Tangible Programming |
Family Applications Before (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/604,620 Expired - Fee Related US10885801B2 (en) | 2016-05-24 | 2017-05-24 | Virtualized tangible programming |
Country Status (1)
Country | Link |
---|---|
US (2) | US10885801B2 (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USD871419S1 (en) * | 2016-05-24 | 2019-12-31 | Tangible Play, Inc. | Display screen or portion thereof with a graphical user interface |
US10847046B2 (en) * | 2017-01-23 | 2020-11-24 | International Business Machines Corporation | Learning with smart blocks |
US11107367B2 (en) * | 2017-03-27 | 2021-08-31 | Apple Inc. | Adaptive assembly guidance system |
US11113989B2 (en) | 2017-03-27 | 2021-09-07 | Apple Inc. | Dynamic library access based on proximate programmable item detection |
US10983663B2 (en) * | 2017-09-29 | 2021-04-20 | Apple Inc. | Displaying applications |
JP6656689B2 (en) * | 2017-12-29 | 2020-03-04 | 株式会社アペイロン | Programming support tool |
JP6633115B2 (en) * | 2018-03-27 | 2020-01-22 | 合同会社オフィス・ゼロ | Program creation support system and method, and program therefor |
KR20200136930A (en) * | 2018-03-30 | 2020-12-08 | 모비디어스 리미티드 | Methods, systems, articles of manufacture and apparatus for creating digital scenes |
US20190340952A1 (en) * | 2018-05-02 | 2019-11-07 | Infitech Co., Ltd. | System for learning programming |
US20190362647A1 (en) * | 2018-05-24 | 2019-11-28 | Steven Brian Robinson | Magnetic Vinyl Sticker Coding Folder |
JP7219914B2 (en) * | 2019-01-28 | 2023-02-09 | 株式会社Icon | learning toys |
US11645944B2 (en) | 2018-07-19 | 2023-05-09 | Icon Corp. | Learning toy, mobile body for learning toy, and panel for learning toy |
JP7219906B2 (en) * | 2018-07-19 | 2023-02-09 | 株式会社Icon | Learning toy, mobile object for learning toy used for this, and portable information processing terminal for learning toy used for this |
JP2020014575A (en) * | 2018-07-24 | 2020-01-30 | 株式会社Icon | Learning toy, learning toy mobile used in the same, and learning toy panel used in the same |
US11498013B2 (en) | 2018-08-17 | 2022-11-15 | Sony Interactive Entertainment Inc. | Card, card reading system, and card set |
US11366514B2 (en) | 2018-09-28 | 2022-06-21 | Apple Inc. | Application placement based on head position |
KR102217922B1 (en) * | 2019-01-08 | 2021-02-19 | 주식회사 럭스로보 | A system for providing assembly information and a module assembly |
US20210004909A1 (en) * | 2019-07-01 | 2021-01-07 | The Travelers Indemnity Company | Systems and methods for real-time accident analysis |
US20210006730A1 (en) * | 2019-07-07 | 2021-01-07 | Tangible Play, Inc. | Computing device |
USD907032S1 (en) | 2019-07-07 | 2021-01-05 | Tangible Play, Inc. | Virtualization device |
CN113711175B (en) | 2019-09-26 | 2024-09-03 | 苹果公司 | Control display |
CN113661691B (en) | 2019-09-27 | 2023-08-08 | 苹果公司 | Electronic device, storage medium, and method for providing an augmented reality environment |
WO2020039413A2 (en) * | 2019-12-13 | 2020-02-27 | Universidad Técnica Particular De Loja | Puzzle-type device for learning computational thinking |
WO2021262507A1 (en) | 2020-06-22 | 2021-12-30 | Sterling Labs Llc | Displaying a virtual display |
JP6993531B1 (en) | 2021-07-12 | 2022-01-13 | ダイコク電機株式会社 | Teaching materials for programming learning and programming learning system |
KR102392584B1 (en) * | 2021-09-10 | 2022-04-29 | (주)코딩앤플레이 | control method for history-based coding education system |
CN114470780A (en) * | 2022-01-12 | 2022-05-13 | 北京字跳网络技术有限公司 | Level scene creating method and device, storage medium and electronic equipment |
GR1010562B (en) * | 2022-12-08 | 2023-10-31 | Αριστοτελειο Πανεπιστημιο Θεσσαλονικης - Ειδικος Λογαριασμος Κονδυλιων Ερευνας, | Hybrid programming system and method for use in a training robot |
WO2025018025A1 (en) * | 2023-07-20 | 2025-01-23 | 株式会社ソニー・インタラクティブエンタテインメント | Toy system, moving body, control method, and program |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110012661A1 (en) * | 2009-07-15 | 2011-01-20 | Yehuda Binder | Sequentially operated modules |
US20150258435A1 (en) * | 2014-03-11 | 2015-09-17 | Microsoft Corporation | Modular construction for interacting with software |
US9953546B1 (en) * | 2014-04-11 | 2018-04-24 | Google Llc | Physical coding blocks |
US20190095178A1 (en) * | 2016-03-03 | 2019-03-28 | Seok Ju CHUN | Electronic block kit system for scratch programming |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6175954B1 (en) * | 1997-10-30 | 2001-01-16 | Fuji Xerox Co., Ltd. | Computer programming using tangible user interface where physical icons (phicons) indicate: beginning and end of statements and program constructs; statements generated with re-programmable phicons and stored |
US20130217491A1 (en) * | 2007-11-02 | 2013-08-22 | Bally Gaming, Inc. | Virtual button deck with sensory feedback |
US9298427B2 (en) * | 2010-01-06 | 2016-03-29 | Microsoft Technology Licensing, Llc. | Creating inferred symbols from code usage |
US9268535B2 (en) * | 2013-03-12 | 2016-02-23 | Zheng Shi | System and method for computer programming with physical objects on an interactive surface |
US20140297035A1 (en) * | 2013-04-01 | 2014-10-02 | Tufts University | Educational robotic systems and methods |
KR101546927B1 (en) * | 2014-05-07 | 2015-08-25 | 김진욱 | Apparatus for educating algorithm with block |
KR101892356B1 (en) * | 2016-10-04 | 2018-08-28 | (주)모션블루 | Apparatus and mehtod for providng coding education using block |
US10460622B2 (en) * | 2017-05-26 | 2019-10-29 | Adigal LLC | Assisted programming using an interconnectable block system |
- 2017-05-24: US15/604,620 filed, granted as US10885801B2 (en); status: not active (Expired - Fee Related)
- 2020-12-30: US17/138,651 filed, published as US20210118313A1 (en); status: not active (Abandoned)
Also Published As
Publication number | Publication date |
---|---|
US10885801B2 (en) | 2021-01-05 |
US20170344127A1 (en) | 2017-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210118313A1 (en) | | Virtualized Tangible Programming |
US20230415030A1 (en) | | Virtualization of Tangible Interface Objects |
US20230343092A1 (en) | | Virtualization of Tangible Interface Objects |
US10984576B2 (en) | | Activity surface detection, display and enhancement of a virtual scene |
US11538220B2 (en) | | Tangible object virtualization station |
US20210232298A1 (en) | | Detection and visualization of a formation of a tangible interface object |
EP3417358B1 (en) | | Activity surface detection, display and enhancement of a virtual scene |
US20200233503A1 (en) | | Virtualization of tangible object components |
US20240005594A1 (en) | | Virtualization of tangible object components |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
 | AS | Assignment | Owner name: TANGIBLE PLAY, INC, CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: HU, FELIX; KANORIA, VIVARDHAN; BREJEON, ARNAUD; SIGNING DATES FROM 20170524 TO 20170612; REEL/FRAME: 055034/0768 |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | AS | Assignment | Owner name: GLAS TRUST COMPANY LLC, NEW JERSEY; Free format text: SECURITY INTEREST; ASSIGNOR: TANGIBLE PLAY, INC.; REEL/FRAME: 060257/0811; Effective date: 20211124 |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
 | AS | Assignment | Owner name: GLAS TRUST COMPANY LLC, NEW JERSEY; Free format text: SECURITY INTEREST; ASSIGNORS: CLAUDIA Z. SPRINGER, AS CHAPTER 11 TRUSTEE OF EPIC! CREATIONS, INC., ON BEHALF OF THE ESTATE OF DEBTOR EPIC! CREATIONS, INC.; CLAUDIA Z. SPRINGER, AS CHAPTER 11 TRUSTEE OF NEURON FUEL, INC., ON BEHALF OF THE ESTATE OF DEBTOR NEURON FUEL, INC.; CLAUDIA Z. SPRINGER, AS CHAPTER 11 TRUSTEE OF TANGIBLE PLAY, INC., ON BEHALF OF THE ESTATE OF DEBTOR TANGIBLE PLAY, INC.; REEL/FRAME: 069290/0456; Effective date: 20241031 |