
WO2013016707A1 - Interactive digital content applications - Google Patents


Info

Publication number
WO2013016707A1
WO2013016707A1 (PCT/US2012/048721)
Authority
WO
WIPO (PCT)
Prior art keywords
scene
dimensional
annotation
images
view
Prior art date
Application number
PCT/US2012/048721
Other languages
French (fr)
Inventor
Kristen SALVATORE
David Rees
Jeremy Williams
Original Assignee
Future Us, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Future Us, Inc.
Publication of WO2013016707A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/426 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/305 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display for providing a graphical or textual hint to the player
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6009 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera
    • A63F2300/6669 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera using a plurality of virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character change rooms
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera
    • A63F2300/6676 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera by dedicated player input

Definitions

  • The present disclosure relates to creating digital content, and more specifically to creating interactive digital content that replicates and simulates a target interactive digital application.
  • Digital applications, such as electronic games and other computer executable content, often include a plethora of features.
  • Developers generally provide a user guide.
  • Many user guides, especially those for games or highly complex applications, can be rather limited in content. In some cases, providing only limited explanatory content is strategic, such as with a game where some of the appeal is derived from the unknown.
  • Application developers, as well as third-party content providers, develop and publish supplemental user guides, such as strategy guides.
  • Even though user guides contain content regarding the use and features of digital applications, which involve user interaction, user guides are generally entirely flat. That is, if the user guide is available in a digital format, it includes very limited interactivity, if any at all. For example, a user guide can be a PDF full of text and still images. Some newer user guides can contain videos, but even with this more dynamic content, the user still remains a mostly passive observer. This is in stark contrast to the user's activities related to the interactive digital application. Thus, while user guides are likely informative and teach the user about the interactive digital application, they fail to replicate the actions and activities that a user will perform while interacting with the digital application. Such a limitation can restrict the effectiveness of a user guide.
  • An interactive digital content application can be any computer executable application designed to include content about or directly from a target interactive digital application, such as audio, video, text, and/or images.
  • A digital content application can include one or more interactive elements that require a user to take an active role with respect to the content presented.
  • An advantage of a digital content application is that a user can interact with the content in a risk free environment. For example, a user is able to explore paths through an electronic game, identify best routes, or identify mistakes to avoid using fewer resources, such as time, real or virtual currency, etc. Additionally, a user is able to learn and/or experiment in an environment outside of the view of other users.
  • A digital content application can contain one or more content sections. Each section can present the content using one of a variety of different presentation formats.
  • A presentation format can be an interactive presentation format. Examples of possible presentation formats can include active view, branching video, effects viewer, concept gallery, and/or multi-way view. Additionally, more passive presentation formats can also be used, such as video trailers, interviews, FAQs, etc.
  • The branching video presentation format can be used to include one or more related video segments in a section.
  • The video segments can include any content, such as developer commentary or gameplay from a particular section of an electronic game.
  • One or more video segments contained in the section can be presented to a user. The user can view the available video segments in any order.
  • A branching video presentation format can be multi-level. That is, upon completing a video segment, one or more additional video segments can be presented that are related to the just-completed video segment.
  • A possible use of a multi-level branching video presentation format is to present different paths and/or strategies through a section of an electronic game. A user can then follow different paths through the branching video to see the effects of decisions and/or strategies.
  • Various gameplay statistics can be carried through from one segment to the next so that a user can see which path proved to be most successful.
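As a concrete illustration of how carried-through statistics might work, the sketch below models a multi-level branching video as a tree of segments and accumulates gameplay statistics along a chosen path. The names (`VideoSegment`, `play_path`, the example routes) are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VideoSegment:
    """One clip in a multi-level branching video structure."""
    title: str
    stats: dict                                    # gameplay statistics for this clip
    children: list = field(default_factory=list)   # segments selectable afterwards

def play_path(root, choices):
    """Follow a sequence of child indices from the root segment,
    accumulating statistics so the user can compare paths."""
    totals = dict(root.stats)
    node = root
    for i in choices:
        node = node.children[i]
        for key, value in node.stats.items():
            totals[key] = totals.get(key, 0) + value
    return node.title, totals

# Two alternate strategies branching off a shared intro segment.
stealth = VideoSegment("stealth route", {"time": 95, "deaths": 0})
rush = VideoSegment("rush route", {"time": 60, "deaths": 2})
intro = VideoSegment("level intro", {"time": 30, "deaths": 0}, [stealth, rush])
```

Calling `play_path(intro, [1])` would report the rush route with the intro segment's statistics folded in, letting a viewer compare which path proved most successful.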
  • The active view presentation format can be used to present a three-dimensional view of one or more scenes in a digital application without the use of a 3D engine.
  • The three-dimensional view can allow a user to explore all angles of a scene. Once inside the three-dimensional scene, a user can pan, scan, and/or zoom to explore the various aspects of the scene.
  • The three-dimensional scene can include audio from the target application.
  • The audio can be a subset of the audio from the target application.
  • The included audio can be just the ambient sounds isolated from active sounds, such as active game elements, control panel items, animations, etc.
  • A three-dimensional scene can be constructed from a set of screenshots.
  • Each screenshot can be captured from the same location in a scene in a target digital application, but at a different angle.
  • The set of screenshots can then be mapped to the inside of a three-dimensional space, such as a sphere or cube, to create a full three-dimensional view of the scene.
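One way to realize such a mapping without a 3D engine is to treat the six screenshots as the inside faces of a cube and, for any viewing direction, sample the face of the dominant axis. A minimal sketch under that assumption (the face names and function are illustrative):

```python
def cube_face(x, y, z):
    """Given a viewing direction (x, y, z) from the capture location,
    return which of six screenshots -- mapped to the inside faces of a
    cube -- that direction falls on: the face of the dominant axis."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= az:
        return "up" if y > 0 else "down"
    return "front" if z > 0 else "back"

# Panning or scanning changes the direction vector, and therefore
# which screenshot is sampled; zoom only changes the field of view.
```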
  • A scene in a target digital application can include a variety of content that obstructs the scene, such as heads-up display (HUD) elements, control-related elements, or active animations.
  • The three-dimensional scene can be constructed to not include such scene-obstructing content.
  • A rendered three-dimensional scene can include one or more annotations that a user can activate to reveal additional content. In some cases, activating an annotation can reveal audio, video, and/or text content.
  • An annotation can also be a transition annotation.
  • A transition annotation can be used to connect one three-dimensional scene to another.
  • The transition between two three-dimensional scenes can include displaying transition content, such as a video, including captured video or simulations of one or more actual transitions from the digital application.
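The two annotation behaviors described above can be sketched as one structure: an ordinary annotation reveals content in place, while a transition annotation carries the user into a connected scene, optionally showing transition video on the way. All names here are hypothetical.

```python
class Annotation:
    """An activatable hotspot in a rendered three-dimensional scene."""
    def __init__(self, label, content=None, target_scene=None, transition_video=None):
        self.label = label
        self.content = content                    # audio/video/text revealed on activation
        self.target_scene = target_scene          # set only for transition annotations
        self.transition_video = transition_video  # optional captured/simulated transition

def activate(annotation, current_scene):
    """Return the scene the user ends up in plus what to display:
    a transition annotation moves the user to the connected scene,
    while an ordinary annotation reveals its content in place."""
    if annotation.target_scene is not None:
        return annotation.target_scene, annotation.transition_video
    return current_scene, annotation.content

note = Annotation("lore", content="text about the statue")
door = Annotation("to courtyard", target_scene="courtyard",
                  transition_video="door.mp4")
```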
  • The effects viewer presentation format can be used to present the effect of one or more digital application settings, hardware types, and/or hardware configurations.
  • The effects viewer presentation format can be used to allow the user to explore the difference in scene rendering when different graphics cards are used.
  • A scene can include multiple versions of an image depicting the same scene, only with different settings. When the scene is rendered, a portion of one or more of the images can be used in the rendering.
  • The left half of the rendered scene can be displayed using a first image, and the right half can be rendered using a second image.
  • The user can then move a demarcating object, such as a slider bar (or a shape or series of shapes), to alter where each image is rendered.
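The split rendering can be sketched as a per-row composite: columns left of the slider come from the first image, the rest from the second. Here images are simply rows of pixel values; a real viewer would operate on textures, and the names are illustrative.

```python
def composite(image_a, image_b, split_x):
    """Render the columns left of split_x from image_a (e.g. one graphics
    setting) and the remaining columns from image_b, as in a slider-style
    effects viewer. Both images must have the same dimensions."""
    return [row_a[:split_x] + row_b[split_x:]
            for row_a, row_b in zip(image_a, image_b)]

# Moving the slider simply re-renders with a new split_x.
low = [["L"] * 4 for _ in range(2)]    # scene rendered with low settings
high = [["H"] * 4 for _ in range(2)]   # same scene with high settings
```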
  • The concept gallery presentation format can be used to present a gallery of images designed to be explored in detail by a user.
  • A user can pan, scan, and/or zoom.
  • An image can be associated with audio content that can play while the user is exploring the image.
  • The audio component can include narration describing different aspects of the image and/or features of the digital application exposed in the image.
  • A user's exploration of an image can be independent of any audio narration.
  • The multi-way view presentation format can be used to present multiple content items that each illustrate the same feature of a target digital application, but from a different perspective.
  • The multi-way presentation format can be used to illustrate the relative effectiveness of different gameplay strategies through multiple gameplay videos.
  • A user can select one or more of the content items to discover information about the feature from the chosen perspective.
  • One or more statistics or data points can be revealed.
  • A section configured using the multi-way presentation format can also include a voting feature. The voting feature can allow a user to vote on a content item and see the global vote tally.
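A minimal sketch of such a voting feature, tracking one vote per user and a global tally. In a shipped application the tally would be synchronized through a server; the class and its behavior here are assumptions for illustration.

```python
class VotingFeature:
    """Per-section voting on multi-way content items with a global tally."""
    def __init__(self, items):
        self.tally = {item: 0 for item in items}
        self._votes = {}   # user -> item, so re-voting moves the vote

    def vote(self, user, item):
        previous = self._votes.get(user)
        if previous is not None:
            self.tally[previous] -= 1   # retract the user's earlier vote
        self._votes[user] = item
        self.tally[item] += 1

    def global_tally(self):
        return dict(self.tally)
```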
  • The digital content application can include an interactive opening screen that presents the available sections.
  • The opening screen can include a section icon for each available section.
  • A section icon can include a variety of information about the section, such as a representative image or video, a summary of the section's content, whether the section is locked, a completion rate, the number of user "likes," the number of user comments, etc.
  • The information can be displayed in a pop-up window associated with the section icon.
  • Each section icon can be unique and/or of a different size.
  • The opening screen can be created automatically based on the number of sections in a digital content application and a representative image, a weight, and/or other information associated with each section. The associated weights can be used in determining the placement and/or size of a section icon.
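The automatic sizing step can be sketched as a weight-proportional allocation of grid cells to section icons, so heavier sections get larger icons. The grid size, function name, and section names are assumptions for illustration.

```python
def icon_sizes(sections, grid_cells=12):
    """Allocate grid cells to section icons in proportion to each
    section's weight, guaranteeing every icon at least one cell.
    `sections` is a list of (name, weight) pairs."""
    total_weight = sum(weight for _, weight in sections)
    return {name: max(1, round(grid_cells * weight / total_weight))
            for name, weight in sections}
```

A layout pass would then place the icons, e.g. largest first, into the opening-screen grid.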
  • A digital content application can include a number of other social networking type features, such as "likes," comments, or achievements.
  • The digital content application as a whole can have a comment feature, and each section in the digital content application can have its own associated comment section.
  • A user can post a comment specific to the section and see comments posted by other users.
  • A comment can be directly linked to content in a section, such as a specific time in a video.
  • One or more users can post comments associated with specific times in a video. As the video plays, the comments can be displayed and/or highlighted when the video reaches the specific times. Similar functionality can be included for a "likes" feature.
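Displaying comments as playback reaches their timestamps can be sketched with a sorted list and a binary search over a small display window. The two-second window and the names are arbitrary choices for illustration.

```python
import bisect

def comments_at(comments, t, window=2.0):
    """Return the comments whose timestamp falls within `window` seconds
    at or before playback time t. `comments` is a list of
    (timestamp, text) pairs sorted by timestamp."""
    times = [ts for ts, _ in comments]
    lo = bisect.bisect_left(times, t - window)
    hi = bisect.bisect_right(times, t)
    return [text for _, text in comments[lo:hi]]

# Time-linked comments for one gameplay video.
timeline = [(5.0, "nice dodge"), (12.0, "watch the left flank"), (30.0, "gg")]
```

The player would call this each frame (or on a timer) and highlight whatever it returns.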
  • FIG. 1 illustrates an example system embodiment
  • FIG. 2 illustrates an exemplary configuration of devices and a network
  • FIG. 3 illustrates an exemplary opening screen for a digital content application
  • FIG. 4 illustrates an exemplary opening screen in which a pop-up box has been activated
  • FIG. 5 illustrates an exemplary opening screen for a section that uses the branching video presentation format
  • FIG. 6 illustrates an exemplary section that uses a multi-level branching video format
  • FIG. 7 illustrates a first exemplary view in a three-dimensional scene from a section that uses the active view presentation format
  • FIG. 8 illustrates a second exemplary view in a three-dimensional scene from a section that uses the active view presentation format
  • FIG. 9 illustrates a third exemplary view in a three-dimensional scene from a section that uses the active view presentation format
  • FIG. 10 illustrates a fourth exemplary view in a three-dimensional scene from a section that uses the active view presentation format
  • FIG. 11 illustrates a first exemplary section using the effects view presentation format
  • FIG. 12 illustrates a second exemplary section using the effects view presentation format
  • FIG. 13 illustrates a first exemplary section using the concept gallery presentation format
  • FIG. 14 illustrates a second exemplary section using the concept gallery presentation format
  • FIG. 15 illustrates an exemplary section using the concept gallery presentation format in which the user has zoomed in on a portion of an image
  • FIG. 16 illustrates an exemplary section that uses the multi-way presentation format
  • FIG. 17 illustrates an exemplary set of multi-way content items
  • FIG. 18 illustrates an exemplary comment section associated with a section in a digital content application
  • FIG. 19 illustrates an exemplary section with a user control bar that includes a "like" button
  • FIG. 20 illustrates an exemplary overall social statistics tally for a section.
  • The present disclosure addresses the need in the art for a way to develop digital content that teaches a user about a target interactive digital application while replicating activities that the user will perform when interacting with the target digital application.
  • The disclosure first sets forth a discussion of a basic general-purpose system or computing device, shown in FIG. 1, that can be employed to practice the concepts disclosed herein, before turning to a detailed description of the techniques for creating the features of a digital content application.
  • An exemplary system 100 includes a general-purpose computing device 100, including a processing unit (CPU or processor) 120 and a system bus 110 that couples various system components, including the system memory 130, such as read only memory (ROM) 140 and random access memory (RAM) 150, to the processor 120.
  • The system 100 can include a cache 122 connected directly with, in close proximity to, or integrated as part of the processor 120.
  • The system 100 copies data from the memory 130 and/or the storage device 160 to the cache for quick access by the processor 120. In this way, the cache provides a performance boost that avoids processor 120 delays while waiting for data.
  • These and other modules can control or be configured to control the processor 120 to perform various actions.
  • Other system memory 130 may be available for use as well.
  • The memory 130 can include multiple different types of memory with different performance characteristics. It can be appreciated that the disclosure may operate on a computing device 100 with more than one processor 120, or on a group or cluster of computing devices networked together to provide greater processing capability.
  • The processor 120 can include any general-purpose processor and a hardware module or software module, such as module 1 162, module 2 164, and module 3 166 stored in storage device 160, configured to control the processor 120, as well as a special-purpose processor where software instructions are incorporated into the actual processor design.
  • The processor 120 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc.
  • A multi-core processor may be symmetric or asymmetric.
  • The system bus 110 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • A basic input/output system (BIOS) stored in ROM 140 or the like may provide the basic routine that helps to transfer information between elements within the computing device 100, such as during start-up.
  • The computing device 100 further includes storage devices 160, such as a hard disk drive, a magnetic disk drive, an optical disk drive, a tape drive, or the like.
  • The storage device 160 can include software modules 162, 164, 166 for controlling the processor 120. Other hardware or software modules are contemplated.
  • The storage device 160 is connected to the system bus 110 by a drive interface.
  • A hardware module that performs a particular function includes the software component stored in a non-transitory computer-readable medium in connection with the necessary hardware components, such as the processor 120, bus 110, output device 170, and so forth, to carry out the function.
  • The basic components are known to those of skill in the art, and appropriate variations are contemplated depending on the type of device, such as whether the device 100 is a small, handheld computing device, e.g. a mobile phone, smart phone, or tablet; a desktop computer; an electronic game console; or a computer server.
  • Non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
  • An input device 190 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, speech, and so forth.
  • An output device 170 can also be one or more of a number of output mechanisms known to those of skill in the art.
  • Multimodal systems enable a user to provide multiple types of input to communicate with the computing device 100.
  • The communications interface 180 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
  • The illustrative system embodiment is presented as including individual functional blocks, including functional blocks labeled as a "processor" or processor 120.
  • The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software, and hardware, such as a processor 120, that is purpose-built to operate as an equivalent to software executing on a general-purpose processor.
  • The functions of one or more processors presented in FIG. 1 may be provided by a single shared processor or multiple processors.
  • Illustrative embodiments may include microprocessor and/or digital signal processor (DSP) hardware, read-only memory (ROM) 140 for storing software performing the operations discussed below, and random access memory (RAM) 150 for storing results.
  • Abbreviations: DSP (digital signal processor), ROM (read-only memory), RAM (random access memory), VLSI (very large scale integration).
  • The logical operations of the various embodiments are implemented as: (1) a sequence of computer-implemented steps, operations, or procedures running on a programmable circuit within a general-use computer; (2) a sequence of computer-implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits.
  • The system 100 shown in FIG. 1 can practice all or part of the recited methods, can be a part of the recited systems, and/or can operate according to instructions in the recited non-transitory computer-readable storage media.
  • Such logical operations can be implemented as modules configured to control the processor 120 to perform particular functions according to the programming of the module. For example, FIG. 1 illustrates three modules, Mod1 162, Mod2 164, and Mod3 166, which are modules configured to control the processor 120. These modules may be stored on the storage device 160 and loaded into RAM 150 or memory 130 at runtime, or may be stored, as would be known in the art, in other computer-readable memory locations.
  • A digital content application can be any computer executable code in which at least a portion of the content contained in the digital content application is associated with at least one digital application, e.g. an electronic game, a client application, a server application, a digital product, etc.
  • A digital content application can include audio, video, text, and/or images, as well as various interactive elements.
  • The interactive features can be any features that require a user to take an active role with respect to what content is displayed by the digital content application.
  • A digital content application can include one or more sections, where each section can have one or more sub-sections. The content within each section and/or sub-section can include interactive elements. Additionally, interactive elements can connect sections and/or sub-sections.
  • A digital content application can include one or more network-based features, such as social networking features.
  • The network-based features can include voting, comments, "likes," etc. Through the network-based features, a community can be created around the digital content application.
  • The goal of a digital content application can be to include content about or directly from a target digital application that informs a user about the target digital application.
  • A digital content application can include scenes or levels from an electronic game.
  • A digital content application can include a video of live action within an electronic game.
  • The one or more sections can be designed to teach a user about the target digital application in an interactive manner that can replicate the activities a user will engage in when interacting with the actual target digital application.
  • FIG. 2 illustrates an exemplary system configuration 200 for distribution and use of digital content applications.
  • Such a configuration is particularly useful for digital applications that include one or more network-based features, such as social networking features.
  • A digital content application can be created on, distributed by, and/or executed on a general-purpose computing device like system 100 in FIG. 1.
  • A digital content application system 208 can communicate with one or more client devices 202₁, 202₂, ..., 202ₙ (collectively "202") connected to a network 204 by direct and/or indirect communication.
  • The digital content application system 208 can support connections from a variety of different client devices, such as desktop computers; mobile computers; handheld communications devices, e.g. mobile phones, smart phones, tablets; electronic game consoles; and/or any other network-enabled communications device.
  • The digital content application system 208 can concurrently accept connections from and interact with multiple client devices 202.
  • The digital content application system 208 can be configured to manage the distribution of a digital content application.
  • The digital content application system 208 can receive one or more digital content applications from one or more digital content application providers 206₁, 206₂, ..., 206ₙ (collectively "206").
  • The digital content application system 208 can be configured to store a copy of a digital content application.
  • The digital content application system 208 can request a copy of a digital content application from a digital content application provider 206.
  • The digital content application system 208 can then send the digital content application to the requesting client device 202.
  • The digital content application system 208 can also be configured to manage any data associated with network-based features in a digital application.
  • A digital application can include one or more social networking features, such as comments, voting, and/or "likes."
  • the data associated with the feature should be passed along to the other users of the digital application.
  • a comment should be made visible to the users.
  • the digital content application system 208 can receive the data and distribute it to all active digital content applications.
  • the digital content application system 208 can be configured to support push and/or pull data distribution. Additionally, the digital content application system 208 can maintain a database of the data.
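For illustration only, the push and pull distribution scheme described above might be sketched as follows; the class and method names are hypothetical and not part of the disclosure:

```python
class DigitalContentApplicationSystem:
    """Hypothetical sketch of social-data distribution by system 208."""

    def __init__(self):
        self.events = []          # database of social-feature data
        self.subscribers = []     # callbacks for push-style clients

    def subscribe(self, callback):
        """Register an active client device for push distribution."""
        self.subscribers.append(callback)

    def publish(self, event):
        """Receive data (e.g. a comment) and push it to active clients."""
        self.events.append(event)
        for notify in self.subscribers:
            notify(event)

    def poll(self, since_index):
        """Pull-style distribution: return events a client has not yet seen."""
        return self.events[since_index:]
```

A push client registers a callback via `subscribe`, while a pull client periodically calls `poll` with the index of the last event it received.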
  • digital content application system 208 and digital content application providers 206 are presented herein as separate entities, this is for illustrative purposes only. In some cases, the digital content application system 208 and the digital content application providers 206 can be the same entity. Thus, a single entity can create a digital application, distribute the digital application to one or more client devices, and manage any network-based features associated with the use of a digital application on one or more client devices 202.
  • a digital content application can include one or more sections.
  • Each section can include content associated with the same or different target applications.
  • each section can present the content using a different presentation format. Examples of possible presentation formats include active view, branching video, effects viewer, concept gallery, and/or multi-way view. Each of these presentation formats will be discussed in greater detail below. Additional presentation formats are also possible, such as video trailers, interviews, FAQs, etc.
  • the digital content application can include an opening screen that presents the available sections.
  • Each available section can be presented as an icon, a window, and/or text.
  • FIG. 3 illustrates an exemplary opening screen 300 for a digital content application.
  • the digital content application illustrated in FIG. 3 includes six sections 302, 304, 306, 308, 310, and 312.
  • the section icons can be arranged in a cluster, a linear sequence, and/or any other arrangement.
  • Each section icon can be unique.
  • each section icon can display a representative image of the section.
  • each section icon can include dynamic content, such as an indication as to whether the user has started and/or completed the section, whether the section or a subsection within the section is locked, the number of associated user comments.
  • a section icon can display a percentage representing the user's completion rate for the section, such as completion percentage 314.
  • a section icon can be greyed out to represent that a section is locked.
  • a section icon can display a number of user comments associated with the section, such as comment indicator 316.
  • each section icon can be a different size.
  • a user can select a section icon to activate the content in the section.
  • a user can mouse or roll over a section icon to reveal additional information, such as a pop-up information box that includes text, audio, and/or video describing the section.
  • FIG. 4 illustrates an exemplary opening screen 300 in which a pop-up box 402 has been activated by rolling over section icon 306.
  • pop-up box 402 displays information 404 about the digital application featured in the section as well as a description 406 of the content in the section.
  • the opening screen can be presented each time the digital application is launched. Additionally, the opening screen can serve as a home screen, where upon completion or exit from a section, control is transferred to the opening screen so that the user can select a new section to use.
  • An opening screen can be created automatically for each digital content application based on the number of sections in a digital content application.
  • Each section can have an associated content item, such as a representative image, image sequence, video, and/or text.
  • the associated content item can be displayed in the section icon or window on the opening screen.
  • each section can have an associated weight.
  • a section can be assigned a weight based on the value of the information in the section or the level of importance of the section, e.g. a section containing more detailed information or a greater amount of information can be assigned a higher weight.
  • a section can be assigned a weight based on the amount of interactivity in the section, e.g. a section with more interactive elements can be assigned a higher weight.
  • a digital content application developer can assign the weight.
  • the weight can be automatically computed based on the presence or absence of features in a section.
  • a weight can be automatically computed based on the number of interactive elements in a section or based on the number of interactive elements in a section relative to the other sections. In another example, a weight can be automatically computed based on the length of a section.
  • the arrangement and/or sizes of the section icons can be determined at least in part based on the associated weights. For example, a section with a greater associated weight can have a larger section icon, such as section icon 204. In another example, a section with a greater weight can be placed in a more prominent position on the opening screen. Additionally, the arrangement and/or size can be influenced by a suggested use order. For example, the section icons can be arranged such that the section that is suggested to be used first is most prominent or placed first in a sequence. In some cases, the suggested use order can be part of the weight. Alternatively, a suggested use order can be associated with a section. For example, each section can be assigned a number in a sequence. Then the associated sequence numbers can influence the opening screen arrangement.
  • the layout of the opening screen can be designed to be static. That is, the layout of the section icons on the screen does not change from one execution to another.
  • the opening screen can also be created to have a dynamic layout.
  • the placement of the sections can rotate with each use of the digital content application, after a specified number of uses, or after a specified elapsed time.
  • the size of a section icon can change by changing the weight associated with a section. For example, a lower weight can be assigned to a completed section, or a higher weight can be assigned to a started but not completed section.
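As a purely illustrative sketch of the automatic weighting and weight-based layout described above (the weighting formula and field names are assumptions, not part of the disclosure):

```python
def compute_weight(section):
    """One possible automatic weighting: count of interactive
    elements, weighted more heavily than section length."""
    return section["interactive_elements"] * 2 + section["length_minutes"]

def layout_opening_screen(sections):
    """Order section icons by weight (heaviest, most prominent, first)
    and scale each icon size relative to the heaviest section."""
    weighted = sorted(sections, key=compute_weight, reverse=True)
    max_weight = compute_weight(weighted[0])
    return [
        {"name": s["name"], "size": round(100 * compute_weight(s) / max_weight)}
        for s in weighted
    ]
```

A dynamic layout could be produced by recomputing these weights after each use, e.g. lowering the weight of a completed section.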
  • a presentation format can be branching video.
  • a section that uses the branching video presentation format can include one or more video segments from which the user can select.
  • FIG. 5 illustrates an exemplary opening screen 500 for a section that uses the branching video presentation format.
  • the section's opening screen can include an icon, button, link, etc., that a user can click on to select a video segment.
  • a video segment icon can include a representative image and/or text, such as video segment icon 502.
  • the video segments can contain any content, such as developer commentary or gameplay from a particular section of an electronic game.
  • a section that uses the branching video format can be configured to only present a subset of the video segments at any particular time. That is, the section can be configured with multiple levels of video segments. For example, after completing a first video segment, the section can present multiple new video segment options that logically follow from the completed video segment.
  • FIG. 6 illustrates an exemplary section 600 that uses a multi-level branching video format. Initially, the section can present the first level video segments. After completing a first level video segment, the section can present the relevant second level video segments. Such a presentation scheme can continue until a leaf video segment has been completed. For example, if a user completes first level video segment 602, the section can present second level video segments 604. If a user selects and completes second level video segment 606, the user has reached a leaf, so no new video segments are presented. In some cases, different paths from a root video segment to a leaf video segment can have different lengths.
  • the video segments presented after completing a leaf video segment can vary with the configuration of the branching video section. In some cases, after completing a leaf video segment, control can be returned to the parent level. If the user has completed all of those videos, then the user can select to be returned to some other level. To decrease the burden on the user, after completing a leaf video segment, control can be returned to the closest parent level that has unwatched and/or uncompleted video segments.
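The traversal scheme described above, in which control returns to the closest ancestor level that still has unwatched segments, might be sketched as follows (the data structure and names are illustrative assumptions):

```python
class VideoSegment:
    """A node in the branching video tree."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.parent = None
        for child in self.children:
            child.parent = self

def next_options(leaf, completed):
    """After completing a leaf segment, walk up the tree and return the
    closest ancestor level that still has unwatched segments."""
    node = leaf.parent
    while node is not None:
        remaining = [c for c in node.children if c.name not in completed]
        if remaining:
            return remaining
        node = node.parent
    return []
```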
  • the section can be configured to indicate to the user which video segments the user has already completed.
  • a completed video segment can be greyed out.
  • a completed video segment can have a checkmark next to it. Additional methods of indicating that the user has already completed a video segment are also possible.
  • a possible use of a multi-level branching video presentation format can be to present different paths and/or strategies through a section of an electronic game.
  • a section of an electronic game such as a level, can be broken up into video segments based on decision points in the electronic game.
  • a user can then see the change in the gameplay that can occur when different decisions are made.
  • various gameplay statistics can be carried through from one segment to the next so that a user can see which path proved to be most successful.
  • a goal of a section using the multi-level branching video approach to illustrate an electronic game can be for the user to explore the video segments to identify a successful path through the illustrated game section. For example, it may be that a certain sequence of decisions will result in the optimal path through the level, such as based on speed of completion, length of survival time, most points or awards earned, most achievements, or some other goal.
  • the branching video sequence can be designed to allow the user to explore the different decisions to identify the most successful sequence. After completing the branching video section, the user can use the information learned in the user's own gameplay of the electronic game.
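For example, identifying the most successful root-to-leaf decision sequence could be sketched as a recursive search over the segment tree; the structure and the scoring statistic (points earned) are illustrative assumptions:

```python
def best_path(segment, score=0):
    """Find the root-to-leaf decision sequence with the highest
    accumulated gameplay statistic (here, points earned)."""
    total = score + segment["points"]
    if not segment.get("children"):
        return [segment["name"]], total          # reached a leaf
    paths = [best_path(child, total) for child in segment["children"]]
    names, best = max(paths, key=lambda p: p[1])  # best subtree
    return [segment["name"]] + names, best
```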
  • a branching video sequence can allow a user to learn using fewer resources, such as time, real or virtual currency, etc. Additionally, a branching video sequence can be presented in an environment outside of the view of other users, thus allowing the user to learn and/or explore in a risk-free environment.
  • a presentation format can be active view.
  • a section that uses the active view presentation format can present a three-dimensional view of a scene without the use of a 3D-graphics-rendering engine.
  • the three-dimensional view can allow a user to explore all angles of a scene. Once inside the three-dimensional scene a user can pan, scan, and/or zoom to explore the various aspects of the scene.
  • the active view format can be used to present a section of an electronic game. The scene can be presented to the user as if the user was standing at a location in the game. The user can then look up, down, and all around to see everything that is around the user at that location.
  • a three-dimensional scene can include audio from the target digital application, such as the audio that would be playing while the user was in the scene.
  • the audio used in a three-dimensional scene can be stripped of sounds associated with active or control-related elements present in the scene in the target digital application, such as from characters or inanimate objects speaking, moving or appearing, so that only ambient sounds remain.
  • a three-dimensional scene can be constructed from a set of screenshots.
  • the three-dimensional scene can be constructed by mapping the set of screenshots to the inside of a three-dimensional space such as a sphere or cube.
  • the panoramic screenshots should provide full coverage of the desired scene.
  • One possible method for capturing sufficient coverage can be to pick a location in the scene in the target digital application. Point the camera straight down and take the first panoramic screenshot. After the first shot, rotate the camera to the right until only about 25 percent of the previous screenshot is visible on the left of the screen and capture another panoramic screenshot. Continue rotating right capturing panoramic screenshots until the camera has returned to the starting position.
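Under the capture procedure above, the number of screenshots needed for a full rotation can be estimated from the camera's horizontal field of view and the 25 percent overlap; a minimal sketch, with assumed parameter names:

```python
import math

def shots_for_full_rotation(fov_degrees, overlap_fraction=0.25):
    """Estimate how many screenshots cover a full 360-degree rotation
    when each shot keeps overlap_fraction of the previous one visible
    (25 percent in the capture procedure described above)."""
    advance = fov_degrees * (1 - overlap_fraction)  # new degrees per shot
    return math.ceil(360 / advance)
```

For example, a 90-degree field of view advances 67.5 degrees per shot, so six screenshots cover the full rotation.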
  • a scene in a target digital application can include a variety of content that obstructs the scene.
  • many electronic games include a heads-up display (HUD) to display a variety of gameplay statistics, such as available weapons, ammunition, health, score, maps, time, capabilities, game progression, speedometer, compass, etc.
  • the HUD elements can obscure the view of the scene. Therefore, it may be desirable to eliminate any HUD elements from the panoramic screenshots prior to constructing the three-dimensional scene.
  • the HUD elements can be removed by disabling the HUD elements prior to capturing the screenshots.
  • the HUD elements can be removed through a post-processing step.
  • the HUD elements can be removed by increasing the amount of overlap between successive screenshots so that when the screenshots are put together, the screenshots are overlapped in such a manner that HUD elements are eliminated.
  • some target digital applications can include animations, such as enemies or moving obstacles. Therefore, to create a clear view of the scene it may be necessary to disable and/or freeze any animations prior to capturing the screenshots.
  • FIG. 7 illustrates an exemplary view 700 in a three-dimensional scene from a section that uses the active view presentation format.
  • An annotation can include audio, video, and/or text.
  • a user can mouse over, click on, or otherwise activate the content associated with an annotation.
  • FIG. 7 includes an audio annotation 702.
  • a user can mouse over the audio annotation 702 to reveal an information pop-up window 704. If the user is interested in the content of the audio annotation, the user can click on the audio annotation 702 to activate the audio.
  • FIG. 8 illustrates another exemplary view 800 in the three-dimensional scene.
  • In view 800, the scene has been rotated to the right (by the character looking, or being navigated, to the left). Additionally, the view 800 includes an information annotation 802. The user can mouse over or click on the annotation 802 to reveal the information box 804.
  • a section that uses the active view presentation format can include multiple three-dimensional scenes.
  • one or more three-dimensional scenes can be connected.
  • a scene in an electronic game can include a point where a user can enter a building, a tunnel, etc.
  • the section can model the electronic game's environment through the use of two three-dimensional scenes.
  • the three-dimensional scenes can be connected through a transition annotation.
  • FIG. 9 illustrates an exemplary view 900 in a section that uses the active view presentation format.
  • the view 900 includes a transition annotation 902.
  • a user can mouse over the transition annotation 902, to reveal information about the transition, such as pop-up window 904 that provides a hint about the transition destination.
  • a transition can include an effect, such as an explosion that opens a hole into the next room or three-dimensional scene. Once in the new three-dimensional scene, the user can explore in the same manner as the previous three-dimensional scene.
  • two three-dimensional scenes in the section may not be directly connected in the target digital application.
  • the two three-dimensional scenes may have been selected because they include particularly difficult or interesting features, while the section connecting the two scenes is less interesting.
  • the transition between the two three-dimensional scenes can be a video. Therefore, the user is still able to visualize the relationship between the three-dimensional scenes, but is not burdened with having to activate multiple transition annotations to find the interesting three-dimensional scenes.
  • a number of hints and/or user controls can be displayed to the user to reveal the available features and/or how the user can interact with a rendered scene and/or the section.
  • the hint window can be disabled after a predefined period so that the user has an unobstructed view of the rendered scene.
  • a hint window can be displayed only upon initial entry into a section using the active view format. However, if different scenes in the section require different user actions in order to interact with the scene, then a new hint window can be displayed.
  • a digital application can display a control bar, such as control bar 1002 so that the user can control any audio, move to another scene, and/or explore the scene.
  • a control bar can remain visible.
  • a digital application can be configured such that the control bar is only visible when a mouse pointer is active and/or when a mouse pointer is active in a predefined area, such as in the area of the control bar. Additional techniques for enabling and/or disabling a visual representation of a control bar are also possible.
  • Another presentation format can be an effects viewer.
  • a section that uses the effects viewer presentation format can include one or more scenes designed such that a user is able to explore the differences between various digital application settings, hardware types, and/or hardware configurations.
  • a scene can include multiple versions of an image depicting the same scene but with different settings. For example, a first image can be captured on a device with a first graphics card type and a second image can be captured on a device with a second graphics card type. In another example, a first image can be captured with night vision enabled while the second image can be captured with night vision disabled.
  • the images can be layered one on top of the other in a scene.
  • the scene can then include functionality that allows a user to view the rendered scene where different layers are revealed for different portions of the rendered scene.
  • the left half of the rendered scene can be displayed using the image captured on a device with a first graphics card while the right half can be displayed using the image captured using the second graphics card.
  • Which layer is revealed for which portion of the rendered scene can be demarcated using one or more slider bars or any other geometric, graphic or organic shapes, and/or any combination thereof.
  • a user can move a slider bar left and right to alter which portions of two different layers are used in rendering the scene.
  • a geometric shape can be based on the viewable area as seen through night vision goggles.
  • Additional geometric, graphic, and/or organic shapes are also possible, such as a circle, a thematic shape such as a shield in the case of a medieval game, or peels of an onion.
  • a series of shapes can also be used, such as shapes that look like the sections of an onion peel.
  • FIG. 11 illustrates an exemplary section 1100 using the effects view presentation format in which a user can explore the difference between two different settings on a rendered scene 1102.
  • the rendered scene can include a slider bar 1104 that a user can move right and left.
  • On the left side of the slider bar 1104 is the rendered scene 1102 as captured with the first settings 1106 while on the right side of the slider bar 1104 is the rendered scene 1102 as captured with the second settings 1108.
  • As the user moves the slider bar 1104 to the right, more of the scene 1102 is rendered using the first settings 1106.
  • FIG. 12 illustrates the exemplary section 1100 in which the slider bar 1104 has been moved to the right.
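The two-layer slider rendering described above might be sketched as a simple column-wise composite of the two captured images; the pixel representation (rows of values) is an illustrative assumption:

```python
def composite_with_slider(image_a, image_b, slider_x):
    """Render the effects-viewer scene: columns left of slider_x come
    from the first-settings image, columns at or right of it from the
    second. Images are rows of pixel values with identical dimensions."""
    return [
        row_a[:slider_x] + row_b[slider_x:]
        for row_a, row_b in zip(image_a, image_b)
    ]
```

Moving the slider simply changes `slider_x` and re-renders; additional sliders would partition the columns among more than two layered images.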
  • a scene configured for use in an effects view section can include more than two images.
  • a scene can be rendered with multiple slider bars or other demarcating objects that allow the scene to be rendered as a combination of each of the multiple images.
  • a scene can include three images and two slider bars, thus creating three different sections in the rendered scene.
  • the scene can also be configured such that a user can select two of the settings to compare. In this case, the scene can be rendered using the two images that correspond to the selected settings.
  • hint window 1110 lets the user know that the slider bar 1104 can be moved right and left to alter the rendering of the scene.
  • the hint window 1110 can be disabled after a predefined period so that the user has an unobstructed view of the rendered scene. For example, in FIG. 12 the hint window is no longer obstructing the display of the rendered scene 1102.
  • a hint window can be displayed only upon initial entry into a section using the effects view format. However, if different scenes in the section require different user actions in order to interact with the scene, then a new hint window can be displayed.
  • a digital application can display a control bar 1112 so that the user can control any audio, move to another scene, and/or explore the scene.
  • a control bar can remain visible.
  • a digital application can be configured such that the control bar is only visible when a mouse pointer is active and/or when a mouse pointer is active in a predefined area, such as in the area of the control bar. Additional techniques for enabling and/or disabling a visual representation of a control bar are also possible.
  • a presentation format can be concept gallery.
  • a section that uses the concept gallery presentation format can include a gallery of images. That is, a section using the concept gallery format can include one or more images from one or more digital applications.
  • a concept gallery section can include one or more images from a single digital application, such as images from different levels in an electronic game.
  • a concept gallery section can include one or more images from different digital applications, such as a single image from multiple similar digital applications so that a user can see the difference between the applications.
  • Each image can be captured so as to present a particular feature of the digital application. For example, an image can capture a particular point in a game that includes a number of interesting and/or challenging features.
  • An image can be a two-dimensional image that includes effects to give the appearance of a three-dimensional image.
  • an image can have an associated audio component.
  • the audio component can include narration describing different aspects of the image and/or features of the digital application exposed in the image.
  • FIG. 13 illustrates an exemplary section 1300 using the concept gallery presentation format in which an image 1302 is displayed.
  • a user's exploration of an image can be independent of any audio narration. That is, if an audio narration is discussing a particular aspect of the image, the user is able to explore a different aspect of the image.
  • a number of hints and/or user controls can be displayed to the user to reveal the available features and/or how the user can interact with an image and/or the section.
  • hint window 1304 lets the user know that the image can be explored by panning and scanning across the image.
  • the hint window 1304 can be disabled after a predefined period so that the user has an unobstructed view of the image.
  • FIG. 14 illustrates an exemplary section 1300 using the concept gallery presentation format in which the hint window is no longer obstructing the display of image 1302.
  • a hint window can be displayed only upon initial entry into a section using the concept gallery format. However, if different images in the section require different user actions in order to interact with the image, then a new hint window can be displayed.
  • a digital application can display a control bar 1306 so that the user can control any audio, move to another image, and/or explore the image. In some cases, a control bar can remain visible.
  • a digital application can be configured such that the control bar is only visible when a mouse pointer is active and/or when a mouse pointer is active in a predefined area, such as in the area of the control bar. Additional techniques for enabling and/or disabling a visual representation of a control bar are also possible.
  • FIG. 15 illustrates an exemplary section 1300 using the concept gallery presentation format in which a user has zoomed in on an aspect of the image 1302.
  • a user can zoom in using a mouse action, such as double clicking, double tapping, pinching, scrolling, etc.
  • the control bar can be configured to include a zoom feature, such as zoom control 1502 in control bar 1306.
  • the level of zoom permitted for an image can be based on the image resolution. For example, a digital content application that includes high- resolution images can include greater zoom functionality.
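One way to derive the permitted zoom level from the image resolution, as described above, is to cap zoom at the point where one image pixel fills one display pixel; a hypothetical sketch with assumed parameter names:

```python
def max_zoom_level(image_width, image_height, view_width, view_height):
    """Permit zoom up to the point where one image pixel maps to one
    display pixel, so a higher-resolution image allows greater zoom."""
    return min(image_width / view_width, image_height / view_height)
```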
  • Another presentation format can be multi-way view.
  • a section that uses the multi-way presentation format can include multiple content items that each illustrate the same feature of a digital application, but from a different perspective.
  • the multi-way presentation format can be used to illustrate the relative effectiveness of different gameplay strategies.
  • each content item can be a video.
  • a game player can complete a section of a digital application using a particular game play strategy.
  • a video can include player commentary.
  • FIG. 16 illustrates an exemplary section 1600 that uses the multi-way presentation format.
  • Section 1600 includes three illustrative videos 1602, 1604, and 1606.
  • a section using the multi-way presentation format can include any number of content items.
  • a user can select one or more of the content items to discover information about the feature from the chosen perspective.
  • one or more statistics or data points can be revealed. For example, after completing a strategy video for an electronic game, various game play statistics can be displayed, such as time to completion, resources used, achievements accomplished, change in health level, etc.
  • FIG. 17 illustrates an exemplary set of multi-way content items 1700 after the user has viewed each content item at least once. After the content items have been viewed, the data points are revealed, such as statistics 1702.
  • a section configured using the multi-way presentation format can also include a voting feature.
  • the voting feature can allow a user to vote on a content item, such as the content item they found most useful or like the best. For example, a user can vote on the gameplay strategy the user liked best.
  • the voting can be accomplished through vote buttons associated with each content item, such as vote button 1704. At some point a global vote count can be revealed to the user, such as vote count 1706. In some cases, the overall voting results can be hidden from a user until the user casts a vote. Additionally, the overall vote tally can be updated in real time and/or at periodic intervals.
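The voting feature described above, with per-item tallies that stay hidden from a user until that user casts a vote, might be sketched as follows (class and method names are illustrative, not part of the disclosure):

```python
class VotingFeature:
    """Sketch of multi-way voting: one vote per user, with overall
    results revealed to a user only after that user has voted."""

    def __init__(self, item_ids):
        self.counts = {item_id: 0 for item_id in item_ids}
        self.voters = set()

    def vote(self, user, item_id):
        if user in self.voters:
            return False          # one vote per user
        self.counts[item_id] += 1
        self.voters.add(user)
        return True

    def results_for(self, user):
        """Hide the overall tally until the user casts a vote."""
        return dict(self.counts) if user in self.voters else None
```

Real-time or periodic updates could be layered on by pushing the tally through the distribution mechanism of system 208.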
  • a digital content application can also include a number of different social features, such as "likes" and comments. These social features can be in addition to those already mentioned above, such as the voting feature in the multi-way presentation format.
  • the digital content application can include one or more comment features.
  • the digital content application as a whole can have a comment feature, and each section in the digital content application can have its own associated comment section.
  • a user can post a comment specific to the section and see comments posted by other users.
  • FIG. 18 illustrates an exemplary comment section 1800 associated with a section in a digital content application.
  • a comment can be directly linked to content in a section.
  • a user can associate a comment with a particular video segment in a branch video section.
  • a commenting feature can be configured so that a comment is associated with a specific time in a video. Then during video playback, the comments can be highlighted or revealed when the video reaches the point at which the comments were made.
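Time-linked comment highlighting during playback might be sketched as follows; the two-second highlight window is an assumption for illustration:

```python
def comments_at(comments, playback_time, window=2.0):
    """Return the comments to highlight at the current playback
    position: those whose associated timestamp falls within
    `window` seconds before the current time."""
    return [
        c["text"] for c in comments
        if c["time"] <= playback_time < c["time"] + window
    ]
```

During playback, the player would call this on each time update and highlight or reveal the returned comments.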
  • Each section in the digital content application can also have an associated "likes” feature.
  • individual content within a section can have a "likes” feature.
  • FIG. 19 illustrates an exemplary section 1900 with a user control bar 1902 that includes a "like” button 1904. A user can click the "like” button to indicate that the user likes the particular content.
  • the "likes" feature can include a “dislike” button. Additional methods of including user approval or disapproval are also possible.
  • a digital content application can be configured with various ways to display the social features, such as likes, dislikes, votes, comments, etc.
  • the display can be an overall tally.
  • FIG. 20 illustrates an exemplary overall social statistics tally.
  • a section icon on the opening screen can have an associated pop-up window with information regarding the section.
  • active view section 2002 can have an associated pop-up information window 2004.
  • the window 2004 can include a social statistics tally 2006. Additional methods of displaying social statistics are also possible.
  • a digital application can include an achievements feature.
  • the achievements feature can be configured such that as a user completes the various sections and/or sub-sections, the user is awarded a bonus.
  • a bonus can be unlocking additional content, such as new game strategy hints.
  • a bonus can also be a badge or some other achievement level indicator that can give the user status within the digital content application's social community.
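The achievements feature might be sketched as a mapping from completed sections to unlocked bonuses; the rule set shown is purely illustrative:

```python
ACHIEVEMENTS = {
    # hypothetical rules: sections required -> bonus awarded
    ("intro",): "strategy_hints",
    ("intro", "level1", "level2"): "community_badge",
}

def earned_bonuses(completed_sections):
    """Return the bonuses unlocked by the user's completed sections."""
    done = set(completed_sections)
    return [
        bonus for required, bonus in ACHIEVEMENTS.items()
        if done.issuperset(required)
    ]
```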
  • Embodiments within the scope of the present disclosure may also include tangible and/or non-transitory computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon.
  • Such non-transitory computer-readable storage media can be any available media that can be accessed by a general purpose or special purpose computer, including the functional design of any special purpose processor as discussed above.
  • non-transitory computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions, data structures, or processor chip design.
  • Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
  • Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments.
  • program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, etc. that perform particular tasks or implement particular abstract data types.
  • Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
  • Embodiments of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, videogame consoles, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

Abstract

The effectiveness of user guide type content, such as strategy guides for electronic games, can be improved through the development and distribution of interactive digital content applications. An interactive digital content application can be designed to include content about or directly from a target digital application, such as audio, video, text, and/or images. Additionally, a digital content application can include one or more interactive elements that require a user to take an active role with respect to the content presented. To facilitate the interactivity, a digital content application can be designed to include one or more interactive presentation formats, which can include active view, branching video, effects viewer, concept gallery, and/or multi-way view.

Description

INTERACTIVE DIGITAL CONTENT APPLICATIONS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application Serial No. 61/512,870, entitled, "PC GAMER", filed on 28 July 2011, which is hereby incorporated herein by reference in its entirety.
BACKGROUND
1. Technical Field
[0002] The present disclosure relates to creating digital content and more specifically to creating interactive digital content associated with replicating and simulating the target interactive digital application.
2. Introduction
[0003] Digital applications, such as electronic games and other computer executable content, often include a plethora of features. To aid users in navigating the application and its features, developers generally provide a user guide. Many user guides, especially those for games or highly complex applications, can be rather limited in content. In some cases, providing only limited explanatory content is strategic, such as with a game where some of the appeal is derived from the unknown. However, there are many users that seek assistance in mastering the features of a digital application. To address this need, application developers as well as third-party content providers develop and publish supplemental user guides, such as strategy guides.
[0004] Even though user guides contain content regarding the use and features of digital applications, which involve user interaction, user guides are generally entirely flat. That is, if the user guide is available in a digital format, it includes very limited interactivity, if any at all. For example, a user guide can be a PDF full of text and still images. Some newer user guides can contain videos, but even with this more dynamic content, the user still remains a mostly passive observer. This is in stark contrast to the user's activities related to the interactive digital application. Thus, while the user guides are likely informative and teach the user about the interactive digital application, they fail to replicate the actions and activities that a user will perform while interacting with the digital application. Such a limitation can restrict the effectiveness of a user guide.
SUMMARY
[0005] Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
[0006] The effectiveness of user guide type content, such as strategy guides for electronic games, can be improved through the development and distribution of interactive digital content applications. An interactive digital content application can be any computer executable application designed to include content about or directly from a target interactive digital application, such as audio, video, text, and/or images. To increase user interactivity, a digital content application can include one or more interactive elements that require a user to take an active role with respect to the content presented. An advantage of a digital content application is that a user can interact with the content in a risk free environment. For example, a user is able to explore paths through an electronic game, identify best routes, or identify mistakes to avoid using fewer resources, such as time, real or virtual currency, etc. Additionally, a user is able to learn and/or experiment in an environment outside of the view of other users.
[0007] To facilitate design and development of the interactive elements, a digital content application can contain one or more content sections. Each section can present the content using one of a variety of different presentation formats. In some cases, a presentation format can be an interactive presentation format. Examples of possible presentation formats can include active view, branching video, effects viewer, concept gallery, and/or multi-way view. Additionally, more passive presentation formats can also be used, such as video trailers, interviews, FAQs, etc.
[0008] The branching video presentation format can be used to include one or more related video segments in a section. The video segments can include any content, such as developer commentary or gameplay from a particular section of an electronic game. One or more video segments contained in the section can be presented to a user. The user can view the available video segments in any order. In some configurations, a branching video presentation format can be multi-level. That is, upon completing a video segment, one or more additional video segments can be presented that are related to the just completed video segment. A possible use of a multi-level branching video presentation format can be to present different paths and/or strategies through a section of an electronic game. A user can then follow different paths through the branching video to see the effects of decisions and/or strategies. Furthermore, various gameplay statistics can be carried through from one segment to the next so that a user can see which path proved to be most successful.
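A multi-level branching video section of the kind described above can be modeled as a directed graph of video segments, with gameplay statistics accumulated along whichever path the user follows. The sketch below is illustrative only; the segment names and the "time" statistic are hypothetical examples, not details from the disclosure.

```python
# Illustrative sketch of a multi-level branching video structure.
# Segment names and the "time" statistic are hypothetical.

class VideoSegment:
    def __init__(self, name, stats=None):
        self.name = name
        self.stats = stats or {}   # e.g. {"time": 95} seconds of gameplay
        self.children = []         # segments offered after this one completes

    def branch(self, child):
        """Attach a follow-up segment and return it for chaining."""
        self.children.append(child)
        return child

def path_stats(path):
    """Accumulate statistics across an ordered list of completed segments."""
    totals = {}
    for segment in path:
        for key, value in segment.stats.items():
            totals[key] = totals.get(key, 0) + value
    return totals

# Two strategies through the same level: stealth vs. assault.
intro = VideoSegment("level-1-intro", {"time": 30})
stealth = intro.branch(VideoSegment("stealth-route", {"time": 120}))
assault = intro.branch(VideoSegment("assault-route", {"time": 80}))

print(path_stats([intro, stealth]))   # {'time': 150}
print(path_stats([intro, assault]))   # {'time': 110}
```

Carrying the accumulated totals from segment to segment is what lets a user compare which path through the level proved most successful.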
[0009] The active view presentation format can be used to present a three-dimensional view of one or more scenes in a digital application without the use of a 3D engine. The three-dimensional view can allow a user to explore all angles of a scene. Once inside the three-dimensional scene, a user can pan, scan, and/or zoom to explore the various aspects of the scene. Additionally, the three-dimensional scene can include audio from the target application. In some cases, the audio can be a subset of the audio from the target application. For example, the included audio can be just the ambient sounds isolated from active sounds, such as active game elements, control panel items, animations, etc. A three-dimensional scene can be constructed from a set of screenshots. Each screenshot can be captured from the same location in a scene in a target digital application, but at a different angle. The set of screenshots can then be mapped to the inside of a three-dimensional space such as a sphere or cube to create a full three-dimensional view of the scene. In some cases, a scene in a target digital application can include a variety of content that obstructs the scene, such as heads-up display (HUD) elements, control-related elements, or active animations. The three-dimensional scene can be constructed to not include scene-obstructing content. Additionally, a rendered three-dimensional scene can include one or more annotations that a user can activate to reveal additional content. In some cases, activating an annotation can reveal audio, video, and/or text content. However, an annotation can also be a transition annotation. A transition annotation can be used to connect one three-dimensional scene to another. Furthermore, in some cases, the transition between two three-dimensional scenes can include displaying transition content, such as a video, including captured video or simulations of one or more actual transitions from the digital application.
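The screenshot-to-cube mapping and the annotation behavior described above can be sketched as follows: a viewing direction (pan and tilt angles) selects the cube face whose captured screenshot dominates the view, and annotations are surfaced when they fall inside the current field of view. The face layout, the 45-degree thresholds, and the annotation contents are assumptions for illustration, not details from the disclosure.

```python
# Hypothetical sketch: six screenshots mapped to cube faces, selected by
# the user's current pan (yaw) and tilt (pitch) angles in degrees.
FACES = {"front": 0, "right": 90, "back": 180, "left": 270}

def face_for_view(yaw, pitch):
    """Pick the cube face a view direction lands on (45-degree thresholds)."""
    if pitch > 45:
        return "top"
    if pitch < -45:
        return "bottom"
    yaw = yaw % 360
    for name, center in FACES.items():
        if abs((yaw - center + 180) % 360 - 180) <= 45:
            return name
    return "front"

class Annotation:
    def __init__(self, yaw, pitch, payload, target_scene=None):
        self.yaw, self.pitch = yaw, pitch
        self.payload = payload            # text/audio/video to reveal
        self.target_scene = target_scene  # set for transition annotations

def visible_annotations(annotations, yaw, pitch, fov=45):
    """Annotations that fall within the current field of view."""
    return [a for a in annotations
            if abs((a.yaw - yaw + 180) % 360 - 180) <= fov / 2
            and abs(a.pitch - pitch) <= fov / 2]

notes = [Annotation(10, 0, "ambush point here"),
         Annotation(200, 5, "door", target_scene="scene-2")]
print(face_for_view(95, 0))                                   # right
print([a.payload for a in visible_annotations(notes, 0, 0)])  # ['ambush point here']
```

A transition annotation (the one carrying `target_scene`) would, when activated, hand control to the named scene, optionally playing transition content on the way.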
[0010] The effects viewer presentation format can be used to present the effect of one or more digital application settings, hardware types, and/or hardware configurations. For example, the effects viewer presentation format can be used to allow the user to explore the difference in scene rendering when different graphics cards are used. To allow a user to explore the differences, a scene can include multiple versions of an image depicting the same scene, only with different settings. When the scene is rendered, a portion of one or more of the images can be used in the rendering. For example, the left half of the rendered scene can be displayed using a first image and the right half can be rendered using a second image. The user can then move a demarcating object, such as a slider bar (or a shape or series of shapes), to alter where each image is rendered.
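The slider behavior can be sketched as a per-row composite: pixels left of the slider come from the first image, pixels right of it from the second. Plain nested lists stand in for real frame buffers here, and the pixel values are placeholders.

```python
def composite_at_slider(image_a, image_b, slider_x):
    """Render image_a left of slider_x and image_b from slider_x onward.

    Images are equal-sized 2D lists of pixel values (placeholders for
    real frame buffers from two hardware/settings configurations).
    """
    return [row_a[:slider_x] + row_b[slider_x:]
            for row_a, row_b in zip(image_a, image_b)]

# 2x6 "frames": low settings rendered as 0s, high settings as 1s.
low  = [[0] * 6 for _ in range(2)]
high = [[1] * 6 for _ in range(2)]

print(composite_at_slider(low, high, 4))
# [[0, 0, 0, 0, 1, 1], [0, 0, 0, 0, 1, 1]]
```

Redrawing this composite as the user drags the slider gives the side-by-side comparison the paragraph describes; a non-vertical demarcating shape would simply vary `slider_x` per row.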
[0011] The concept gallery presentation format can be used to present a gallery of images designed to be explored in detail by a user. To explore an image, a user can pan, scan, and/or zoom. Additionally, an image can be associated with audio content that can play while the user is exploring the image. For example, the audio component can include narration describing different aspects of the image and/or features of the digital application exposed in the image. A user's exploration of an image can be independent of any audio narration.
[0012] The multi-way view presentation format can be used to present multiple content items, each of which illustrates the same feature of a target digital application from a different perspective. For example, the multi-way presentation format can be used to illustrate the relative effectiveness of different gameplay strategies through multiple gameplay videos. A user can select one or more of the content items to discover information about the feature from the chosen perspective. In some embodiments, upon completion of a content item, such as a strategy video, one or more statistics or data points can be revealed. A section configured using the multi-way presentation format can also include a voting feature. The voting feature can allow a user to vote on a content item and see the global vote tally.
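The voting feature can be sketched as a per-item tally with a sorted global view. The item names are hypothetical; in practice the tally would be synchronized across users through a distribution server rather than kept locally.

```python
class VotingSection:
    """Minimal sketch of per-item voting with a global tally."""

    def __init__(self, items):
        self.tally = {item: 0 for item in items}

    def vote(self, item):
        self.tally[item] += 1

    def global_tally(self):
        # Sorted so the most popular strategy appears first.
        return sorted(self.tally.items(), key=lambda kv: -kv[1])

section = VotingSection(["stealth video", "assault video", "speedrun video"])
for choice in ["stealth video", "speedrun video", "stealth video"]:
    section.vote(choice)
print(section.global_tally())
# [('stealth video', 2), ('speedrun video', 1), ('assault video', 0)]
```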
[0013] Additionally, when a digital content application includes multiple sections, the digital content application can include an interactive opening screen that presents the available sections. The opening screen can include a section icon for each available section. In some cases, a section icon can include a variety of information about the section, such as a representative image or video, a summary of the section's content, whether the section is locked, a completion rate, number of user "likes," number of user comments, etc. In some cases, the information can be displayed in a pop-up window associated with the section icon. Each section icon can be unique and/or of a different size. The opening screen can be created automatically based on the number of sections in a digital content application and a representative image, a weight, and/or other information associated with each section. The associated weights can be used in determining the placement and/or size of a section icon.
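The weight-driven sizing described above can be sketched by allocating each section icon a screen area proportional to its weight within a fixed layout budget. The section names, weights, and total area are illustrative assumptions.

```python
def icon_sizes(weights, total_area=6000):
    """Allocate screen area to section icons in proportion to weight."""
    total_weight = sum(weights.values())
    return {name: round(total_area * w / total_weight)
            for name, w in weights.items()}

# Hypothetical sections; a higher weight means more detail/interactivity.
weights = {"walkthrough": 3, "concept art": 1, "dev interview": 2}
print(icon_sizes(weights))
# {'walkthrough': 3000, 'concept art': 1000, 'dev interview': 2000}
```

A placement step would then pack the sized icons into the opening screen, e.g. placing the heaviest sections nearest the center of a cluster.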
[0014] In addition to the voting feature, a digital content application can include a number of other social networking type features, such as "likes" and comments or achievements. For example, the digital content application as a whole can have a comment feature, and each section in the digital content application can have its own associated comment section. A user can post a comment specific to the section and see comments posted by other users. In some cases, a comment can be directly linked to content in a section, such as a specific time in a video. For example, one or more users can post comments associated with specific times in a video. As the video plays, the comments can be displayed and/or highlighted when the video reaches the specific times. Similar functionality can be included for a "likes" feature.
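Comments linked to specific times in a video can be sketched as (timestamp, text) pairs filtered against the current playback position. The display window and the sample comments are illustrative.

```python
def comments_at(comments, position, window=2.0):
    """Return comments whose timestamp falls within `window` seconds of playback.

    `comments` is a list of (timestamp_seconds, text) pairs; in a real
    application these would come from the shared comment database.
    """
    return [text for ts, text in comments
            if ts <= position < ts + window]

video_comments = [(12.0, "nice dodge!"), (45.5, "this boss is brutal")]
print(comments_at(video_comments, 13.1))   # ['nice dodge!']
print(comments_at(video_comments, 30.0))   # []
```

Polling this function as playback advances displays or highlights each comment when the video reaches its linked time; the same lookup works for time-linked "likes."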
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
[0016] FIG. 1 illustrates an example system embodiment;
[0017] FIG. 2 illustrates an exemplary configuration of devices and a network;
[0018] FIG. 3 illustrates an exemplary opening screen for a digital content application;
[0019] FIG. 4 illustrates an exemplary opening screen in which a pop-up box has been activated;
[0020] FIG. 5 illustrates an exemplary opening screen for a section that uses the branching video presentation format;
[0021] FIG. 6 illustrates an exemplary section that uses a multi-level branching video format;
[0022] FIG. 7 illustrates a first exemplary view in a three-dimensional scene from a section that uses the active view presentation format;
[0023] FIG. 8 illustrates a second exemplary view in a three-dimensional scene from a section that uses the active view presentation format;
[0024] FIG. 9 illustrates a third exemplary view in a three-dimensional scene from a section that uses the active view presentation format;
[0025] FIG. 10 illustrates a fourth exemplary view in a three-dimensional scene from a section that uses the active view presentation format;
[0026] FIG. 11 illustrates a first exemplary section using the effects view presentation format;
[0027] FIG. 12 illustrates a second exemplary section using the effects view presentation format;
[0028] FIG. 13 illustrates a first exemplary section using the concept gallery presentation format;
[0029] FIG. 14 illustrates a second exemplary section using the concept gallery presentation format;
[0030] FIG. 15 illustrates an exemplary section using the concept gallery presentation format in which the user has zoomed in on a portion of an image;
[0031] FIG. 16 illustrates an exemplary section that uses the multi-way presentation format;
[0032] FIG. 17 illustrates an exemplary set of multi-way content items;
[0033] FIG. 18 illustrates an exemplary comment section associated with a section in a digital content application;
[0034] FIG. 19 illustrates an exemplary section with a user control bar that includes a "like" button; and
[0035] FIG. 20 illustrates an exemplary overall social statistics tally for a section.
DETAILED DESCRIPTION
[0036] Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
[0037] The present disclosure addresses the need in the art for a way to develop digital content that teaches a user about a target interactive digital application while replicating activities that the user will do when interacting with the target digital application. The disclosure first sets forth a discussion of a basic general purpose system or computing device in FIG. 1 that can be employed to practice the concepts disclosed herein before turning to a detailed description of the techniques for creating the features of a digital content application.
[0038] With reference to FIG. 1, an exemplary system 100 includes a general-purpose computing device 100, including a processing unit (CPU or processor) 120 and a system bus 110 that couples various system components including the system memory 130 such as read only memory (ROM) 140 and random access memory (RAM) 150 to the processor 120. The system 100 can include a cache 122 connected directly with, in close proximity to, or integrated as part of the processor 120. The system 100 copies data from the memory 130 and/or the storage device 160 to the cache for quick access by the processor 120. In this way, the cache provides a performance boost that avoids processor 120 delays while waiting for data. These and other modules can control or be configured to control the processor 120 to perform various actions. Other system memory 130 may be available for use as well. The memory 130 can include multiple different types of memory with different performance characteristics. It can be appreciated that the disclosure may operate on a computing device 100 with more than one processor 120 or on a group or cluster of computing devices networked together to provide greater processing capability. The processor 120 can include any general purpose processor and a hardware module or software module, such as module 1 162, module 2 164, and module 3 166 stored in storage device 160, configured to control the processor 120 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 120 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
[0039] The system bus 110 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 140 or the like may provide the basic routine that helps to transfer information between elements within the computing device 100, such as during start-up. The computing device 100 further includes storage devices 160 such as a hard disk drive, a magnetic disk drive, an optical disk drive, tape drive or the like. The storage device 160 can include software modules 162, 164, 166 for controlling the processor 120. Other hardware or software modules are contemplated. The storage device 160 is connected to the system bus 110 by a drive interface. The drives and the associated computer readable storage media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing device 100. In one aspect, a hardware module that performs a particular function includes the software component stored in a non-transitory computer-readable medium in connection with the necessary hardware components, such as the processor 120, bus 110, output device 170, and so forth, to carry out the function. The basic components are known to those of skill in the art and appropriate variations are contemplated depending on the type of device, such as whether the device 100 is a small, handheld computing device, e.g. a mobile phone, smart phone, or tablet; a desktop computer; an electronic game console; or a computer server.
[0040] Although the exemplary embodiment described herein employs the hard disk 160, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs) 150, read only memory (ROM) 140, a cable or wireless signal containing a bit stream and the like, may also be used in the exemplary operating environment. Non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
[0041] To enable user interaction with the computing device 100, an input device 190 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 170 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 100. The communications interface 180 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
[0042] For clarity of explanation, the illustrative system embodiment is presented as including individual functional blocks including functional blocks labeled as a "processor" or processor 120. The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software and hardware, such as a processor 120, that is purpose-built to operate as an equivalent to software executing on a general purpose processor. For example, the functions of one or more processors presented in FIG. 1 may be provided by a single shared processor or multiple processors. (Use of the term "processor" should not be construed to refer exclusively to hardware capable of executing software.) Illustrative embodiments may include microprocessor and/or digital signal processor (DSP) hardware, read-only memory (ROM) 140 for storing software performing the operations discussed below, and random access memory (RAM) 150 for storing results. Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided.
[0043] The logical operations of the various embodiments are implemented as: (1) a sequence of computer implemented steps, operations, or procedures running on a programmable circuit within a general use computer, (2) a sequence of computer implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits. The system 100 shown in FIG. 1 can practice all or part of the recited methods, can be a part of the recited systems, and/or can operate according to instructions in the recited non-transitory computer-readable storage media. Such logical operations can be implemented as modules configured to control the processor 120 to perform particular functions according to the programming of the module. For example, FIG. 1 illustrates three modules Mod1 162, Mod2 164 and Mod3 166 which are modules configured to control the processor 120. These modules may be stored on the storage device 160 and loaded into RAM 150 or memory 130 at runtime or may be stored as would be known in the art in other computer-readable memory locations.
[0044] Before disclosing a detailed description of the present technology, the disclosure turns to a brief introductory description of a general digital content application. A digital content application can be any computer executable code in which at least a portion of the content contained in the digital content application is associated with at least one digital application, e.g. an electronic game, a client application, a server application, a digital product, etc. A digital content application can include audio, video, text, and/or images, as well as various interactive elements. The interactive features can be any features that require a user to take an active role with respect to what content is displayed by the digital content application. Furthermore, a digital content application can include one or more sections, where each section can have one or more sub-sections. The content within each section and/or sub-section can include interactive elements. Additionally, interactive elements can connect sections and/or sub-sections.
[0045] In some cases, a digital content application can include one or more network-based features, such as social networking features. The network-based features can include voting, comments, "likes," etc. Through the network-based features, a community can be created around the digital content application.
[0046] In general, the goal of a digital content application can be to include content about or directly from a target digital application that informs a user about the target digital application. For example, a digital content application can include scenes or levels from an electronic game. In another example, a digital content application can include a video of live action within an electronic game. Furthermore, the one or more sections can be designed to teach a user about the target digital application in an interactive manner that can replicate the activities a user will engage in when interacting with the actual target digital application. An advantage of a digital content application is that a user can interact with the content in a risk free environment. For example, a user is able to explore paths through an electronic game, identify best routes, or identify mistakes to avoid using fewer resources, such as time, real or virtual currency, etc. Additionally, a user is able to learn and/or experiment in an environment outside of the view of other users.
[0047] FIG. 2 illustrates an exemplary system configuration 200 for distribution and use of digital content applications. Such a configuration is particularly useful for digital applications that include one or more network-based features, such as social networking features. In the exemplary system configuration 200, a digital content application can be created on, distributed by, and/or executed on a general-purpose computing device like system 100 in FIG. 1.
[0048] In system 200, a digital content application system 208 can communicate with one or more client devices 202₁, 202₂, ..., 202ₙ (collectively "202") connected to a network 204 by direct and/or indirect communication. The digital content application system 208 can support connections from a variety of different client devices, such as desktop computers; mobile computers; handheld communications devices, e.g. mobile phones, smart phones, tablets; electronic game consoles; and/or any other network-enabled communications device. Furthermore, digital content application system 208 can concurrently accept connections from and interact with multiple client devices 202.
[0049] The digital content application system 208 can be configured to manage the distribution of a digital content application. For example, the digital content application system 208 can receive one or more digital content applications from one or more digital content application providers 206₁, 206₂, ..., 206ₙ (collectively "206"). In some cases, the digital content application system 208 can be configured to store a copy of a digital content application. Alternatively, upon receiving a request for a digital content application from a client device 202, the digital content application system 208 can request a copy of a digital content application from a digital content application provider 206. The digital content application system 208 can then send the digital content application to the requesting client device 202.
[0050] The digital content application system 208 can also be configured to manage any data associated with network-based features in a digital content application. For example, a digital content application can include one or more social networking features, such as comments, voting, and/or "likes." When a user uses one of these features, the data associated with the feature should be passed along to the other users of the digital content application. For example, a comment should be made visible to the users. To accomplish this, the digital content application system 208 can receive the data and distribute it to all active digital content applications. The digital content application system 208 can be configured to support push and/or pull data distribution. Additionally, the digital content application system 208 can maintain a database of the data.
[0051] Although digital content application system 208 and digital content application providers 206 are presented herein as separate entities, this is for illustrative purposes only. In some cases, the digital content application system 208 and the digital content application providers 206 can be the same entity. Thus, a single entity can create a digital content application, distribute the digital content application to one or more client devices, and manage any network-based features associated with the use of a digital content application on one or more client devices 202.
[0052] Having disclosed an introductory description of a general digital content application and a system that can facilitate the distribution and use of digital content applications, the disclosure now turns to a discussion of techniques for creating a variety of features within a digital content application. A person skilled in the relevant art will recognize that while the disclosure frequently uses strategy guides for electronic games to illustrate the present technology, the disclosed techniques can be applied to create informative digital content applications for a variety of different digital applications. For example, the present technology can also be used to create interactive digital advertisements for various digital products.
[0053] As described above, a digital content application can include one or more sections. Each section can include content associated with the same or different target applications. Furthermore, each section can present the content using a different presentation format. Examples of possible presentation formats include active view, branching video, effects viewer, concept gallery, and/or multi-way view. Each of these presentation formats will be discussed in greater detail below. Additional presentation formats are also possible, such as video trailers, interviews, FAQs, etc.
[0054] When a digital content application includes multiple sections, the digital content application can include an opening screen that presents the available sections. Each available section can be presented as an icon, a window, and/or text. FIG. 3 illustrates an exemplary opening screen 300 for a digital content application. The digital content application illustrated in FIG. 3 includes six sections 302, 304, 306, 308, 310, and 312. The section icons can be arranged in a cluster, a linear sequence, and/or any other arrangement. Each section icon can be unique. For example, each section icon can display a representative image of the section. Furthermore, each section icon can include dynamic content, such as an indication as to whether the user has started and/or completed the section, whether the section or a subsection within the section is locked, the number of associated user comments. For example, a section icon can display a percentage representing the user's completion rate for the section, such as completion percentage 314. In another example, a section icon can be greyed out to represent that a section is locked. In a further example, a section icon can display a number of user comments associated with the section, such as comment indicator 316.
[0055] Additionally, each section icon can be a different size. During execution of the digital application a user can select a section icon to activate the content in the section. Furthermore, in some cases, a user can mouse or roll over a section icon to reveal additional information, such as a pop-up information box that includes text, audio, and/or video describing the section. For example, FIG. 4 illustrates an exemplary opening screen 300 in which a pop-up box 402 has been activated by rolling over section icon 306. In this case, pop-up box 402 displays information 404 about the digital application featured in the section as well as a description 406 of the content in the section. The opening screen can be presented each time the digital application is launched. Additionally, the opening screen can serve as a home screen, where upon completion or exit from a section, control is transferred to the opening screen so that the user can select a new section to use.
[0056] An opening screen can be created automatically for each digital content application based on the number of sections in a digital content application. Each section can have an associated content item, such as a representative image, image sequence, video, and/or text. The associated content item can be displayed in the section icon or window on the opening screen.
[0057] Additionally, each section can have an associated weight. For example, a section can be assigned a weight based on the value of the information in the section or the level of importance of the section, e.g. a section containing more detailed information or a greater amount of information can be assigned a higher weight. In another example, a section can be assigned a weight based on the amount of interactivity in the section, e.g. a section with more interactive elements can be assigned a higher weight. A digital content application developer can assign the weight. However, in some cases, the weight can be automatically computed based on the presence or absence of features in a section. For example, a weight can be automatically computed based on the number of interactive elements in a section or based on the number of interactive elements in a section relative to the other sections. In another example, a weight can be automatically computed based on the length of a section.
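By way of non-limiting illustration, one such automatic weight computation could blend a section's interactivity and length relative to the other sections. The dictionary fields and the 70/30 blend below are assumptions for the sketch, not part of the disclosure.

```python
def compute_weight(section, all_sections):
    """Score a section relative to its peers.

    Hypothetical dict fields: 'interactive_elements' (a count) and
    'length' (e.g. minutes of content). The 0.7/0.3 blend is illustrative.
    """
    max_elems = max(s["interactive_elements"] for s in all_sections)
    max_len = max(s["length"] for s in all_sections)
    # Blend relative interactivity and relative length into one score.
    return (0.7 * section["interactive_elements"] / max(max_elems, 1)
            + 0.3 * section["length"] / max(max_len, 1))
```

A short, highly interactive section can thus outweigh a long but passive one, consistent with assigning higher weights to sections with more interactive elements.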
[0058] The arrangement and/or sizes of the section icons can be determined at least in part based on the associated weights. For example, a section with a greater associated weight can have a larger section icon, such as section icon 204. In another example, a section with a greater weight can be placed in a more prominent position on the opening screen. Additionally, the arrangement and/or size can be influenced by a suggested use order. For example, the section icons can be arranged such that the section that is suggested to be used first is most prominent or placed first in a sequence. In some cases, the suggested use order can be part of the weight. Alternatively, a suggested use order can be associated with a section. For example, each section can be assigned a number in a sequence. Then the associated sequence numbers can influence the opening screen arrangement.
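The weight-driven arrangement and sizing described above could be sketched as follows; the 'weight' field and the proportional sizing rule are illustrative assumptions rather than the only possible layout policy.

```python
def layout_sections(sections):
    """Order section icons heaviest-first and size each icon in
    proportion to its weight (field names are illustrative)."""
    ordered = sorted(sections, key=lambda s: s["weight"], reverse=True)
    total = sum(s["weight"] for s in ordered)
    for s in ordered:
        # Each icon's share of the opening screen area.
        s["icon_scale"] = s["weight"] / total if total else 0.0
    return ordered
```

A suggested use order could be folded in by sorting on a (sequence number, weight) tuple instead of weight alone.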
[0059] The layout of the opening screen can be designed to be static. That is, the layout of the section icons on the screen does not change from one execution to another. However, the opening screen can also be created to have a dynamic layout. For example, the placement of the sections can rotate with each use of the digital content application, after a specified number of uses, or after a specified elapsed time. In another example, the size of a section icon can change by changing the weight associated with a section, e.g. a lower weight can be assigned to a completed section or a higher weight can be assigned to a started but not completed section.
[0060] As mentioned above, a digital content application can use a number of different presentation formats to present content to a user. A presentation format can be branching video. A section that uses the branching video presentation format can include one or more video segments from which the user can select. For example, FIG. 5 illustrates an exemplary opening screen 500 for a section that uses the branching video presentation format. For each available video segment, the section's opening screen can include an icon, button, link, etc., that a user can click on to select a video segment. In some cases, a video segment icon can include a representative image and/or text, such as video segment icon 502. The video segments can contain any content, such as developer commentary or gameplay from a particular section of an electronic game.
[0061] A section that uses the branching video format can be configured to only present a subset of the video segments at any particular time. That is, the section can be configured with multiple levels of video segments. For example, after completing a first video segment, the section can present multiple new video segment options that logically follow from the completed video segment. FIG. 6 illustrates an exemplary section 600 that uses a multi-level branching video format. Initially, the section can present the first level video segments. After completing a first level video segment, the section can present the relevant second level video segments. Such a presentation scheme can continue until a leaf video segment has been completed. For example, if a user completes first level video segment 602, the section can present second level video segments 604. If a user selects and completes second level video segment 606, the user has reached a leaf so no new video segments can be presented. In some cases, different paths from a root video segment to a leaf video segment can have different lengths.
[0062] The video segments presented after completing a leaf video segment can vary with the configuration of the branching video section. In some cases, after completing a leaf video segment, control can be returned to the parent level. If the user has completed all of those videos, then the user can elect to return to some other level. To decrease the burden on the user, after completing a leaf video segment, control can be returned to the closest parent level that has unwatched and/or uncompleted video segments.
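The multi-level branching behavior, including the return of control to the closest parent level with unwatched segments, could be modeled as in the following sketch. The class and its fields are hypothetical, not part of the disclosure.

```python
class VideoSegment:
    """One node in a branching video tree (hypothetical structure)."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.parent = None
        self.watched = False
        for c in self.children:
            c.parent = self

    def next_options(self):
        """After completing this segment, present its children; at a leaf,
        climb to the closest ancestor that still has unwatched children."""
        if self.children:
            return self.children
        node = self.parent
        while node is not None:
            unwatched = [c for c in node.children if not c.watched]
            if unwatched:
                return unwatched
            node = node.parent
        return []  # every segment in the tree has been watched
```

Because paths from root to leaf may have different lengths, the upward walk naturally handles unbalanced trees.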
[0063] In some cases, the section can be configured to indicate to the user which video segments the user has already completed. For example, a completed video segment can be greyed out. In another example, a completed video segment can have a checkmark next to it. Additional methods of indicating that the user has already completed a video segment are also possible.
[0064] A possible use of a multi-level branching video presentation format can be to present different paths and/or strategies through a section of an electronic game. For example, a section of an electronic game, such as a level, can be broken up into video segments based on decision points in the electronic game. A user can then see the change in the gameplay that can occur when different decisions are made. Furthermore, various gameplay statistics can be carried through from one segment to the next so that a user can see which path proved to be most successful.
[0065] A goal of a section using the multi-level branching video approach to illustrate an electronic game can be for the user to explore the video segments to identify a successful path through the illustrated game section. For example, it may be that a certain sequence of decisions will result in the optimal path through the level, such as based on speed of completion, length of survival time, most points or awards earned, most achievements, or some other goal. The branching video sequence can be designed to allow the user to explore the different decisions to identify the most successful sequence. After completing the branching video section, the user can use the information learned in the user's own gameplay of the electronic game. In some cases, a branching video sequence can allow a user to learn using fewer resources, such as time, real or virtual currency, etc. Additionally, a branching video sequence can be presented in an environment outside of the view of other users, thus allowing the user to learn and/or explore in a risk-free environment.
[0066] A presentation format can be active view. A section that uses the active view presentation format can present a three-dimensional view of a scene without the use of a 3D-graphics-rendering engine. The three-dimensional view can allow a user to explore all angles of a scene. Once inside the three-dimensional scene a user can pan, scan, and/or zoom to explore the various aspects of the scene. For example, the active view format can be used to present a section of an electronic game. The scene can be presented to the user as if the user was standing at a location in the game. The user can then look up, down, and all around to see everything that is around the user at that location. Additionally, in some cases, a three-dimensional scene can include audio from the target digital application, such as the audio that would be playing while the user was in the scene. In some cases, the audio used in a three-dimensional scene can be stripped of sounds associated with active or control-related elements present in the scene in the target digital application, such as from characters or inanimate objects speaking, moving or appearing, so that only ambient sounds remain.
[0067] A three-dimensional scene can be constructed from a set of screenshots. In some cases, the three-dimensional scene can be constructed by mapping the set of screenshots to the inside of a three-dimensional space such as a sphere or cube. To create a comprehensive three-dimensional view, the panoramic screenshots should provide full coverage of the desired scene. One possible method for capturing sufficient coverage can be to pick a location in the scene in the target digital application. Point the camera straight down and take the first panoramic screenshot. After the first shot, rotate the camera to the right until only about 25 percent of the previous screenshot is visible on the left of the screen and capture another panoramic screenshot. Continue rotating right and capturing panoramic screenshots until the camera has returned to the starting position. After completing this first level, rotate the camera up until only about 25 percent of the previous level is visible at the bottom of the screen and capture a panoramic screenshot. Repeat the steps of rotating right and capturing panoramic screenshots until reaching the starting position. Continue the rotation pattern upward until the entire scene in the target digital application has been captured. Additional methods of capturing a set of screenshots to construct a three-dimensional view are also possible, such as changing the percentage of the scene overlap captured in successive screenshots, the type of screenshot, e.g. not panoramic, changing one or more rotation directions, etc.
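The capture procedure above amounts to stepping the camera through a grid of yaw and pitch angles sized so that about 25 percent of each previous shot remains visible. A sketch of such a schedule is shown below; the field-of-view values are assumptions and should be replaced with the target application's actual camera settings.

```python
def capture_angles(h_fov_deg=90.0, v_fov_deg=60.0, overlap=0.25):
    """(yaw, pitch) schedule for covering a full sphere of screenshots,
    keeping `overlap` (e.g. 25%) of the previous shot visible each time.
    FOV values are illustrative assumptions."""
    yaw_step = h_fov_deg * (1.0 - overlap)    # advance by the non-overlapping part
    pitch_step = v_fov_deg * (1.0 - overlap)
    angles = []
    pitch = -90.0                             # start pointing straight down
    while pitch <= 90.0:
        yaw = 0.0
        while yaw < 360.0:                    # one full ring of shots
            angles.append((round(yaw, 1), round(pitch, 1)))
            yaw += yaw_step
        pitch += pitch_step                   # tilt up for the next ring
    return angles
```

Increasing the overlap fraction yields more screenshots per ring, which can also help eliminate HUD elements as described below.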
[0068] In some cases, a scene in a target digital application can include a variety of content that obstructs the scene. For example, many electronic games include a heads-up display (HUD) to display a variety of gameplay statistics, such as available weapons, ammunition, health, score, maps, time, capabilities, game progression, speedometer, compass, etc. The HUD elements can obscure the view of the scene. Therefore, it may be desirable to eliminate any HUD elements from the panoramic screenshots prior to constructing the three-dimensional scene. In some cases, the HUD elements can be removed by disabling the HUD elements prior to capturing the screenshots. The HUD elements can be removed through a post-processing step. Alternatively, the HUD elements can be removed by increasing the amount of overlap between successive screenshots so that when the screenshots are put together, the screenshots are overlapped in such a manner that HUD elements are eliminated. Additionally, some target digital applications can include animations, such as enemies or moving obstacles. Therefore, to create a clear view of the scene it may be necessary to disable and/or freeze any animations prior to capturing the screenshots.
[0069] FIG. 7 illustrates an exemplary view 700 in a three-dimensional scene from a section that uses the active view presentation format. After constructing the three-dimensional scene from the set of screenshots, one or more annotations can also be added to the rendered scene. An annotation can include audio, video, and/or text. A user can mouse over, click on, or otherwise activate the content associated with an annotation. For example, FIG. 7 includes an audio annotation 702. A user can mouse over the audio annotation 702 to reveal an information pop-up window 704. If the user is interested in the content of the audio annotation, the user can click on the audio annotation 702 to activate the audio.
[0070] FIG. 8 illustrates another exemplary view 800 in the three-dimensional scene. In view 800 the scene has been rotated to the right (by the character looking / being navigated to the left). Additionally, the view 800 includes an information annotation 802. The user can mouse over or click on the annotation 802 to reveal the information box 804.
[0071] A section that uses the active view presentation format can include multiple three-dimensional scenes. In some cases, one or more three-dimensional scenes can be connected. For example, a scene in an electronic game can include a point where a user can enter a building, a tunnel, etc. The section can model the electronic game's environment through the use of two three-dimensional scenes. In some cases, the three-dimensional scenes can be connected through a transition annotation. For example, FIG. 9 illustrates an exemplary view 900 in a section that uses the active view presentation format. The view 900 includes a transition annotation 902. A user can mouse over the transition annotation 902 to reveal information about the transition, such as pop-up window 904 that provides a hint about the transition destination. To activate the transition to the connected three-dimensional scene, the user can click on the transition annotation. For example, by clicking on the transition annotation 902 in view 900, the user can be taken to the view 1000 in FIG. 10. In some cases, a transition can include an effect, such as an explosion that opens a hole into the next room or three-dimensional scene. Once in the new three-dimensional scene, the user can explore in the same manner as the previous three-dimensional scene.
[0072] In some cases, two three-dimensional scenes in the section may not be directly connected in the target digital application. For example, there may be a long tunnel or an entire section of a level in an electronic game between the two scenes. The two three-dimensional scenes may have been selected because they include particularly difficult or interesting features, while the section connecting the two scenes is less interesting. In this case, the transition between the two three-dimensional scenes can be a video. Therefore, the user is still able to visualize the relationship between the three-dimensional scenes, but is not burdened with having to activate multiple transition annotations to find the interesting three-dimensional scenes.
[0073] In some cases, upon entering a section using the active view format, a hint window and/or user controls can be displayed to the user to reveal the available features and/or how the user can interact with a rendered scene and/or the section. The hint window can be disabled after a predefined period so that the user has an unobstructed view of the rendered scene. In some cases, a hint window can be displayed only upon initial entry into a section using the active view format. However, if different scenes in the section require different user actions in order to interact with the scene, then a new hint window can be displayed.
[0074] Additionally, a digital application can display a control bar, such as control bar 1002 so that the user can control any audio, move to another scene, and/or explore the scene. In some cases, a control bar can remain visible. Alternatively, a digital application can be configured such that the control bar is only visible when a mouse pointer is active and/or when a mouse pointer is active in a predefined area, such as in the area of the control bar. Additional techniques for enabling and/or disabling a visual representation of a control bar are also possible.
[0075] Another presentation format can be an effects viewer. A section that uses the effects viewer presentation format can include one or more scenes designed such that a user is able to explore the differences between various digital application settings, hardware types, and/or hardware configurations. A scene can include multiple versions of an image depicting the same scene only with different settings. For example, a first image can be captured on a device with a first graphics card type and a second image can be captured on a device with a second graphics card type. In another example, a first image can be captured with night vision enabled while the second image can be captured with night vision disabled.
[0076] The images can be layered one on top of the other in a scene. The scene can then include functionality that allows a user to view the rendered scene where different layers are revealed for different portions of the rendered scene. For example, the left half of the rendered scene can be displayed using the image captured on a device with a first graphics card while the right half can be displayed using the image captured using the second graphics card. Which layer is revealed for which portion of the rendered scene can be demarcated using one or more slider bars or any other geometric, graphic or organic shapes, and/or any combination thereof. For example, a user can move a slider bar left and right to alter which portions of two different layers are used in rendering the scene. In another example, a geometric shape can be based on the viewable area as seen through night vision goggles. Additional geometric, graphic, and/or organic shapes are also possible, such as a circle, a thematic shape such as a shield in the case of a medieval game, or peels of an onion. A series of shapes can also be used, such as shapes that look like sections of an onion peel.
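The layer-revealing behavior of a vertical slider bar could be sketched as a simple column-wise composite of two same-sized images. The row-major pixel-list representation is an assumption for illustration only.

```python
def composite(layer_a, layer_b, slider_x):
    """Render a scene from two same-sized image layers: columns left of
    `slider_x` come from `layer_a`, the rest from `layer_b`.
    Images are plain row-major lists of pixel values (illustrative)."""
    return [row_a[:slider_x] + row_b[slider_x:]
            for row_a, row_b in zip(layer_a, layer_b)]
```

Moving the slider right increases `slider_x`, so more of the scene is rendered from the first layer; non-rectangular demarcations such as circles would replace the column test with a per-pixel mask.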
[0077] FIG. 11 illustrates an exemplary section 1100 using the effects view presentation format in which a user can explore the difference between two different settings on a rendered scene 1102. The rendered scene can include a slider bar 1104 that a user can move right and left. On the left side of the slider bar 1104 is the rendered scene 1102 as captured with the first settings 1106 while on the right side of the slider bar 1104 is the rendered scene 1102 as captured with the second settings 1108. As the user moves the slider bar 1104 to the right, more of the scene 1102 is rendered using the first settings 1106. Likewise, as the user moves the slider bar 1104 to the left, more of the scene 1102 is rendered using the second settings 1108. For example, FIG. 12 illustrates the exemplary section 1100 in which the slider bar 1104 has been moved to the right.
[0078] In some embodiments, a scene configured for use in an effects view section can include more than two images. In some cases, a scene can be rendered with multiple slider bars or other demarcating objects that allow the scene to be rendered as a combination of each of the multiple images. For example, a scene can include three images and two slider bars, thus creating three different sections in the rendered scene. The scene can also be configured such that a user can select two of the settings to compare. In this case, the scene can be rendered using the two images that correspond to the selected settings.
[0079] In some cases, upon entering a section using the effects view format a number of hints and/or user controls can be displayed to the user to reveal the available features and/or how the user can interact with a rendered scene and/or the section. For example, returning to FIG. 11, hint window 1110 lets the user know that the slider bar 1104 can be moved right and left to alter the rendering of the scene. The hint window 1110 can be disabled after a predefined period so that the user has an unobstructed view of the rendered scene. For example, in FIG. 12 the hint window is no longer obstructing the display of the rendered scene 1102. In some cases, a hint window can be displayed only upon initial entry into a section using the effects view format. However, if different scenes in the section require different user actions in order to interact with the scene, then a new hint window can be displayed.
[0080] Additionally, a digital application can display a control bar 1112 so that the user can control any audio, move to another scene, and/or explore the scene. In some cases, a control bar can remain visible. Alternatively, a digital application can be configured such that the control bar is only visible when a mouse pointer is active and/or when a mouse pointer is active in a predefined area, such as in the area of the control bar. Additional techniques for enabling and/or disabling a visual representation of a control bar are also possible.
[0081] Another presentation format can be concept gallery. A section that uses the concept gallery presentation format can include a gallery of images. That is, a section using the concept gallery format can include one or more images from one or more digital applications. For example, a concept gallery section can include one or more images from a single digital application, such as images from different levels in an electronic game. In another example, a concept gallery section can include one or more images from different digital applications, such as a single image from multiple similar digital applications so that a user can see the difference between the applications. Each image can be captured so as to present a particular feature of the digital application. For example, an image can capture a particular point in a game that includes a number of interesting and/or challenging features. An image can be a two-dimensional image that includes effects to give the appearance of a three-dimensional image. In some cases, an image can have an associated audio component. The audio component can include narration describing different aspects of the image and/or features of the digital application exposed in the image.
[0082] A user can explore an image by panning and scanning across the image. For example, FIG. 13 illustrates an exemplary section 1300 using the concept gallery presentation format in which an image 1302 is displayed. A user's exploration of an image can be independent of any audio narration. That is, if an audio narration is discussing a particular aspect of the image, the user is able to explore a different aspect of the image. In some cases, upon entering a section using the concept gallery format a number of hints and/or user controls can be displayed to the user to reveal the available features and/or how the user can interact with an image and/or the section. For example, hint window 1304 lets the user know that the image can be explored by panning and scanning across the image. The hint window 1304 can be disabled after a predefined period so that the user has an unobstructed view of the image. For example, FIG. 14 illustrates an exemplary section 1300 using the concept gallery presentation format in which the hint window is no longer obstructing the display of image 1302. In some cases, a hint window can be displayed only upon initial entry into a section using the concept gallery format. However, if different images in the section require different user actions in order to interact with the image, then a new hint window can be displayed. Additionally, a digital application can display a control bar 1306 so that the user can control any audio, move to another image, and/or explore the image. In some cases, a control bar can remain visible. Alternatively, a digital application can be configured such that the control bar is only visible when a mouse pointer is active and/or when a mouse pointer is active in a predefined area, such as in the area of the control bar. Additional techniques for enabling and/or disabling a visual representation of a control bar are also possible.
[0083] A user can also explore an image in a section using the concept gallery presentation format by zooming in and out. For example, FIG. 15 illustrates an exemplary section 1300 using the concept gallery presentation format in which a user has zoomed in on an aspect of the image 1302. In some cases, a user can zoom in using a mouse or touch action, such as double clicking, double tapping, pinching, scrolling, etc. Additionally, the control bar can be configured to include a zoom feature, such as zoom control 1502 in control bar 1306. The level of zoom permitted for an image can be based on the image resolution. For example, a digital content application that includes high-resolution images can include greater zoom functionality.
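The resolution-based zoom limit could be computed as in the following sketch, where the parameter names and the one-source-pixel-per-screen-pixel default are illustrative assumptions.

```python
def max_zoom(image_w, image_h, view_w, view_h, min_pixel_ratio=1.0):
    """Greatest zoom factor before the source image is stretched past
    `min_pixel_ratio` source pixels per screen pixel (names illustrative)."""
    # At zoom z, each screen pixel samples (image_w / (view_w * z)) source
    # pixels horizontally; cap z so that ratio never drops below the minimum.
    return min(image_w / view_w, image_h / view_h) / min_pixel_ratio
```

A 4000x2000 image shown in a 1000x500 viewport would thus permit up to 4x zoom, matching the observation that higher-resolution images support greater zoom functionality.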
[0084] Another presentation format can be multi-way view. A section that uses the multi-way presentation format can include multiple content items, each of which illustrates the same feature of a digital application from a different perspective. For example, the multi-way presentation format can be used to illustrate the relative effectiveness of different gameplay strategies. In this case, each content item can be a video. In each video, a game player can complete a section of a digital application using a particular game play strategy. In some cases, a video can include player commentary.
[0085] FIG. 16 illustrates an exemplary section 1600 that uses the multi-way presentation format. Section 1600 includes three illustrative videos 1602, 1604, and 1606. However, a section using the multi-way presentation format can include any number of content items. A user can select one or more of the content items to discover information about the feature from the chosen perspective. In some embodiments, upon completion of a content item, such as a strategy video, one or more statistics or data points can be revealed. For example, after completing a strategy video for an electronic game, various game play statistics can be displayed, such as time to completion, resources used, achievements accomplished, change in health level, etc. In FIG. 16 none of the content items 1602, 1604, or 1606 have been viewed, and therefore the associated data points 1608, 1610, and 1612 are blank. FIG. 17 illustrates an exemplary set of multi-way content items 1700 after the user has viewed each content item at least once. After the content items have been viewed, the data points are revealed, such as statistics 1702.
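The reveal-after-viewing behavior of the data points could be modeled as in this hypothetical sketch; the class and field names are assumptions for illustration.

```python
class MultiWayItem:
    """A content item whose statistics stay hidden until viewed once."""

    def __init__(self, title, stats):
        self.title = title
        self._stats = stats        # e.g. {"time": "4:12", "resources": 3}
        self.viewed = False

    def mark_viewed(self):
        self.viewed = True

    def visible_stats(self):
        # Data points are blank until the item has been viewed at least once.
        return self._stats if self.viewed else {}
```

This mirrors FIGS. 16 and 17: before viewing, the associated data points render blank; afterward, the gameplay statistics are revealed.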
[0086] A section configured using the multi-way presentation format can also include a voting feature. The voting feature can allow a user to vote on a content item, such as the content item they found most useful or like the best. For example, a user can vote on the gameplay strategy the user liked best. The voting can be accomplished through vote buttons associated with each content item, such as vote button 1704. At some point a global vote count can be revealed to the user, such as vote count 1706. In some cases, the overall voting results can be hidden from a user until the user casts a vote. Additionally, the overall vote tally can be updated in real time and/or at periodic intervals.
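The voting feature, including hiding the overall tally until the user casts a vote, could be sketched as follows. The one-vote-per-user policy is an illustrative assumption.

```python
class VotingFeature:
    """Per-item vote counts, hidden from each user until that user votes."""

    def __init__(self, item_ids):
        self.counts = {i: 0 for i in item_ids}
        self.voters = set()

    def vote(self, user, item_id):
        if user in self.voters:
            return False           # one vote per user (illustrative policy)
        self.counts[item_id] += 1
        self.voters.add(user)
        return True

    def tally_for(self, user):
        # Overall results stay hidden until this user has cast a vote.
        return dict(self.counts) if user in self.voters else None
```

Real-time or periodic tally updates would simply re-call `tally_for` on whatever refresh schedule the application chooses.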
[0087] A digital content application can also include a number of different social features, such as "likes" and comments. These social features can be in addition to those already mentioned above, such as the voting feature in the multi-way presentation format. The digital content application can include one or more comment features. For example, the digital content application as a whole can have a comment feature, and each section in the digital content application can have its own associated comment section. A user can post a comment specific to the section and see comments posted by other users. FIG. 18 illustrates an exemplary comment section 1800 associated with a section in a digital content application.
[0088] In some cases, a comment can be directly linked to content in a section. For example, a user can associate a comment with a particular video segment in a branching video section. Furthermore, a commenting feature can be configured so that a comment is associated with a specific time in a video. Then during video playback, the comments can be highlighted or revealed when the video reaches the point at which the comments were made.
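The time-coded comment behavior could be sketched as a lookup of comments near the current playback position. The timestamp-to-text mapping and the two-second reveal window are assumptions for the sketch.

```python
def comments_to_reveal(comments, playback_time, window=2.0):
    """Comments whose timestamp falls within `window` seconds at or before
    the current playback position. `comments` maps a timestamp in seconds
    to the comment text (illustrative representation)."""
    return [text for ts, text in sorted(comments.items())
            if playback_time - window <= ts <= playback_time]
```

A player would call this on each playback tick and highlight whatever comments it returns.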
[0089] Each section in the digital content application can also have an associated "likes" feature. Furthermore, individual content within a section can have a "likes" feature. For example, FIG. 19 illustrates an exemplary section 1900 with a user control bar 1902 that includes a "like" button 1904. A user can click the "like" button to indicate that the user likes the particular content. In some cases, the "likes" feature can include a "dislike" button. Additional methods of including user approval or disapproval are also possible.
[0090] Additionally, a digital content application can be configured with various ways to display the social features, such as likes, dislikes, votes, comments, etc. In some cases, the display can be an overall tally. For example, FIG. 20 illustrates an exemplary overall social statistics tally. As described above, a section icon on the opening screen can have an associated pop-up window with information regarding the section. For example, active view section 2002 can have an associated pop-up information window 2004. In addition to a variety of other information about the section, such as user completion rate, the window 2004 can include a social statistics tally 2006. Additional methods of displaying social statistics are also possible.
[0091] In some embodiments, a digital application can include an achievements feature. The achievements feature can be configured such that as a user completes the various sections and/or sub-sections, the user is awarded a bonus. In some cases, a bonus can be unlocking additional content, such as new game strategy hints. A bonus can also be a badge or some other achievement level indicator that can give the user status within the digital content application's social community.
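The achievements feature could be modeled as a set of unlock rules checked against the user's completed sections, as in this hypothetical sketch; the rule names and set-based representation are assumptions.

```python
def check_achievements(completed, rules):
    """Return bonuses unlocked by the sections completed so far.
    `rules` maps a bonus name to the set of sections it requires
    (illustrative representation)."""
    return [bonus for bonus, required in rules.items()
            if required <= completed]   # subset test: all requirements met
```

Bonuses returned here could unlock additional content, such as new game strategy hints, or award a badge within the application's social community.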
[0092] Embodiments within the scope of the present disclosure may also include tangible and/or non-transitory computer-readable storage media for carrying or having computer-executable instructions or data structures stored thereon. Such non-transitory computer-readable storage media can be any available media that can be accessed by a general purpose or special purpose computer, including the functional design of any special purpose processor as discussed above. By way of example, and not limitation, such non-transitory computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions, data structures, or processor chip design. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.
[0093] Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
[0094] Those of skill in the art will appreciate that other embodiments of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, videogame consoles, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
[0095] The various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. Those skilled in the art will readily recognize various modifications and changes that may be made to the principles described herein without following the example embodiments and applications illustrated and described herein, and without departing from the spirit and scope of the disclosure.

Claims

We claim:
1. A computer-implemented method comprising:
receiving a user input to change a point of view of a virtual character in a simulated three-dimensional space, the simulated three-dimensional space being comprised of a collection of images taken from screenshots of a three-dimensional scene rendered by a 3-Dimensional graphics rendering engine, each of the collection of images corresponding to a specified point of view of the virtual character;
displaying an image taken from the collection of images that corresponds to a changed point of view of the virtual character;
displaying an interactive scene annotation within the image captured from the collection of images that corresponds to a changed point of view of the virtual character; and
receiving an input effective to interact with the scene annotation.
2. The computer-implemented method of claim 1, wherein the scene annotation is linked to another simulated three-dimensional space, and when the input effective to interact with the scene annotation is received:
displaying an image taken from a collection of images that correspond to a point of view in a second simulated three-dimensional space.
3. The computer-implemented method of claim 1, wherein the scene annotation provides information about the current view of the simulated three-dimensional space.
4. The computer-implemented method of claim 1, wherein, when the input effective to interact with the scene annotation is received, the scene annotation plays a media file.
5. The computer-implemented method of claim 1, wherein, when the input effective to interact with the scene annotation is received, the scene annotation plays a movie, and upon the termination of the movie:
displaying an image taken from a collection of images that correspond to a point of view in a second simulated three-dimensional space.
6. The computer-implemented method of claim 5, wherein the movie is a recording of footage taken from action generated by the 3-Dimensional graphics rendering engine.
7. The computer-implemented method of claim 1, wherein the three-dimensional scene rendered by a 3-Dimensional graphics rendering engine is from a video game.
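By way of example, and not limitation, the view selection recited in claims 1-7 could be realized by keying the pre-captured screenshots to their capture angles and serving the nearest match when the point of view changes; the angle quantization and record layout below are illustrative assumptions, not part of the disclosure:

```python
def nearest_view(images, requested_angle):
    """Pick the pre-rendered screenshot whose capture angle is closest
    to the requested point-of-view angle (angles in degrees, 0-359)."""
    def angular_distance(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(images, key=lambda img: angular_distance(img["angle"], requested_angle))

# Screenshots captured every 90 degrees around one point of origin.
scene_images = [
    {"angle": 0,   "file": "north.png", "annotations": []},
    {"angle": 90,  "file": "east.png",  "annotations": [{"x": 120, "y": 80, "type": "info"}]},
    {"angle": 180, "file": "south.png", "annotations": []},
    {"angle": 270, "file": "west.png",  "annotations": []},
]

view = nearest_view(scene_images, 100)   # user swiped slightly past east
```

Because only stored screenshots are looked up, no 3-Dimensional graphics rendering engine is needed at display time; the annotations travel with the image records and can be drawn over the selected view.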
8. A computer-implemented method comprising:
receiving a collection of static images taken of a three-dimensional scene created by a 3-Dimensional graphics rendering engine;
mapping each image to a coordinate in a virtual three-dimensional space;
editing out scene obscuring content from each of the static images; and
anchoring an interactive scene annotation to a coordinate in the virtual three-dimensional space, whereby a reproduced three-dimensional scene is created.
9. The computer-implemented method of claim 8, further comprising:
linking the interactive scene annotation to a second reproduced three-dimensional scene.
10. A computer-implemented method comprising:
rendering on a display a first view of a first three-dimensional scene, the first view corresponding to a first angle in the first three-dimensional scene, a three-dimensional scene comprising a plurality of images and at least one annotation, each image associated with a point of origin and an angle, wherein a point of origin corresponds to a location in a scene in a digital application and an angle corresponds to an angle at which the image was captured with respect to the location, and further wherein the plurality of images do not include scene obstructing content;
in response to receiving a request to adjust the first view, rendering a second view of the first three-dimensional scene, the second view corresponding to an adjustment angle specified in the request;
in response to receiving a request to activate the at least one annotation, performing a transition when a type of the at least one annotation is determined to be a transition annotation, wherein performing a transition comprises:
displaying a transition content item associated with the transition annotation and, upon completion of the transition content item, rendering a first view of a second three-dimensional scene, the first view corresponding to an angle associated with the transition annotation.
11. The method of claim 10, wherein the transition content comprises a video.
12. The method of claim 10, wherein the transition content item comprises an image.
13. The method of claim 11, wherein completion of the transition content item occurs after a specified period of time.
14. The method of claim 10, wherein scene obstructing content includes at least one of HUD elements or active animations.
15. The method of claim 10, wherein the at least one annotation comprises at least one of audio, video, text, or transition.
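By way of illustration, the transition behavior of claims 10-15 (display a transition content item, then land on the first view of the second scene) could be modeled as a small state update; the field names below are hypothetical:

```python
def activate_annotation(state, annotation):
    """Advance the viewer state when an annotation is activated.

    A transition annotation first shows its transition content (e.g. a
    video); once that content completes, the viewer lands on the target
    scene at the angle associated with the annotation.  Other annotation
    types are returned to the caller to handle (audio, text, ...).
    """
    if annotation["type"] != "transition":
        return state, annotation["type"]
    # Phase 1: play the transition content item.
    state = dict(state, playing=annotation["content"])
    # Phase 2: on completion, jump to the target scene and angle.
    state = dict(state,
                 playing=None,
                 scene=annotation["target_scene"],
                 angle=annotation["target_angle"])
    return state, "transition"

state = {"scene": "courtyard", "angle": 0, "playing": None}
door = {"type": "transition", "content": "door_flythrough.mp4",
        "target_scene": "great_hall", "target_angle": 180}
state, kind = activate_annotation(state, door)
```

In a real viewer the two phases would be separated by a playback-complete event (or, per claim 13, a timer); here they are collapsed for brevity.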
16. A computer-implemented method comprising:
receiving a plurality of images, each image associated with a point of origin and an angle, wherein a point of origin corresponds to a location in a scene in a digital application and an angle corresponds to an angle at which the image was captured with respect to the location, and further wherein the plurality of images do not include scene obstructing content;
grouping the plurality of images based on a point of origin associated with each image to generate at least a first scene grouping of images and a second scene grouping of images;
mapping, via a processor, the images in the at least first scene grouping and the second scene grouping to an inside of a three-dimensional space such as a sphere or cube to generate at least a first three-dimensional scene and a second three-dimensional scene;
generating an active view section from the generated at least first three-dimensional scene and the second three-dimensional scene;
adding at least one annotation to a three-dimensional scene in the active view section;
adding a transition between the first three-dimensional scene and the second three-dimensional scene in the active view section, wherein the transition is associated with a first transition annotation in the first three-dimensional scene, a second transition annotation in the second three-dimensional scene, and transition content.
17. The method of claim 16, wherein scene obstructing content includes at least one of HUD elements or active animations.
18. The method of claim 16, wherein the transition content is a video.
19. The method of claim 16, wherein an annotation comprises at least one of audio, video, or text.
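The grouping step of claim 16 (collecting images into per-scene groupings by their point of origin) admits a straightforward sketch; the capture records are illustrative only:

```python
from collections import defaultdict

def group_by_origin(images):
    """Group captured images into per-scene collections keyed by their
    point of origin, the first step before mapping each grouping onto
    the inside of a sphere or cube."""
    scenes = defaultdict(list)
    for img in images:
        scenes[img["origin"]].append(img)
    return dict(scenes)

captures = [
    {"origin": "spawn_point", "angle": 0,  "file": "a.png"},
    {"origin": "spawn_point", "angle": 90, "file": "b.png"},
    {"origin": "boss_room",   "angle": 0,  "file": "c.png"},
]
scenes = group_by_origin(captures)
```

Each resulting grouping would then be mapped onto its own three-dimensional scene and linked into the active view section.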
20. A computer-implemented method for simulating a three-dimensional scene without use of a 3-D rendering engine comprising:
compiling a collection of images of a three-dimensional scene, wherein the collection of images includes a plurality of overlapping images such that when combined the entire three-dimensional scene is represented in one or more of the overlapping images;
associating each image with a specified point-of-view within the scene;
editing out scene obscuring content from each of the images;
anchoring a scene annotation to a pixel coordinate, the pixel coordinate being located in a first image of the images associated with a specific point of view; and
displaying the first image along with the scene annotation anchored to the pixel coordinate within it.
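By way of example, anchoring an annotation to a pixel coordinate implies a hit test when the user taps the displayed image; a minimal sketch, with an assumed touch radius, follows:

```python
def hit_test(annotations, touch_x, touch_y, radius=24):
    """Return the annotation anchored nearest the touch point, provided
    the touch falls within `radius` pixels of its anchor coordinate;
    otherwise return None."""
    best, best_d2 = None, radius * radius
    for a in annotations:
        d2 = (a["x"] - touch_x) ** 2 + (a["y"] - touch_y) ** 2
        if d2 <= best_d2:
            best, best_d2 = a, d2
    return best

# Annotations anchored to pixel coordinates within the displayed image.
anchored = [
    {"x": 200, "y": 150, "label": "treasure chest"},
    {"x": 40,  "y": 300, "label": "exit door"},
]
hit = hit_test(anchored, 205, 148)
miss = hit_test(anchored, 0, 0)
```

The radius of 24 pixels is an assumption sized for touch input; a mouse-driven viewer might use a smaller tolerance.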

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161512870P 2011-07-28 2011-07-28
US61/512,870 2011-07-28

Publications (1)

Publication Number Publication Date
WO2013016707A1 true WO2013016707A1 (en) 2013-01-31

Family

ID=46614660

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/048721 WO2013016707A1 (en) 2011-07-28 2012-07-27 Interactive digital content applications

Country Status (1)

Country Link
WO (1) WO2013016707A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1103920A2 (en) * 1999-11-25 2001-05-30 Sony Computer Entertainment Inc. Entertainment apparatus, image generation method, and storage medium
WO2008147561A2 (en) * 2007-05-25 2008-12-04 Google Inc. Rendering, viewing and annotating panoramic images, and applications thereof


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ARTFUNKLE: "Skybox (2D)", 10 July 2011 (2011-07-10), XP002683397, Retrieved from the Internet <URL:https://developer.valvesoftware.com/w/index.php?title=Skybox_%282D%29&oldid=154679> [retrieved on 20120913] *
MILLER G ET AL: "THE VIRTUAL MUSEUM: INTERACTIVE 3D NAVIGATION OF A MULTIMEDIA DATABASE", JOURNAL OF VISUALIZATION AND COMPUTER ANIMATION, XX, XX, vol. 3, no. 3, 1 January 1992 (1992-01-01), pages 183 - 197, XP000925118 *
SHENCHANG ERIC CHEN ED - COOK R: "QUICKTIME VR - AN IMAGE-BASED APPROACH TO VIRTUAL ENVIRONMENT NAVIGATION", COMPUTER GRAPHICS PROCEEDINGS. LOS ANGELES, AUG. 6 - 11, 1995; [COMPUTER GRAPHICS PROCEEDINGS (SIGGRAPH)], NEW YORK, IEEE, US, 6 August 1995 (1995-08-06), pages 29 - 38, XP000546213, ISBN: 978-0-89791-701-8 *
WILSON A ET AL: "A video-based rendering acceleration algorithm for interactive walkthroughs", PROCEEDINGS ACM MULTIMEDIA 2000 WORKSHOPS. MARINA DEL REY, CA, NOV. 4, 2000; [ACM INTERNATIONAL MULTIMEDIA CONFERENCE], NEW YORK, NY : ACM, US, vol. CONF. 8, 30 October 2000 (2000-10-30), pages 75 - 84, XP002175632, ISBN: 978-1-58113-311-0, DOI: 10.1145/354384.354431 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109074404A (en) * 2016-05-12 2018-12-21 Samsung Electronics Co., Ltd. Method and apparatus for providing content navigation
EP3443489A4 (en) * 2016-05-12 2019-04-10 Samsung Electronics Co., Ltd. Method and apparatus for providing content navigation
US10841557B2 (en) 2016-05-12 2020-11-17 Samsung Electronics Co., Ltd. Content navigation
US11563915B2 (en) 2019-03-11 2023-01-24 JBF Interlude 2009 LTD Media content presentation
US11997413B2 (en) 2019-03-11 2024-05-28 JBF Interlude 2009 LTD Media content presentation
WO2021054852A1 (en) 2019-09-17 2021-03-25 Акционерное общество "Нейротренд" Method for determining the effectiveness of the visual presentation of text-based materials

Similar Documents

Publication Publication Date Title
Linowes Unity virtual reality projects
Nebeling et al. The trouble with augmented reality/virtual reality authoring tools
Parisi Learning virtual reality: Developing immersive experiences and applications for desktop, web, and mobile
Herbst et al. TimeWarp: interactive time travel with a mobile mixed reality game
Choi et al. Interactive and immersive learning using 360 virtual reality contents on mobile platforms
US10970843B1 (en) Generating interactive content using a media universe database
US9429912B2 (en) Mixed reality holographic object development
CN103959344B (en) The augmented reality crossing over multiple equipment represents
CN102663799B (en) Method and device for creation of a playable scene with an authoring system
WO2017019296A1 (en) A system for compositing video with interactive, dynamically rendered visual aids
CN107590771A (en) With the 2D videos for the option of projection viewing in 3d space is modeled
Linowes Unity virtual reality projects: Learn virtual reality by developing more than 10 engaging projects with unity 2018
Linowes Unity 2020 virtual reality projects: Learn VR development by building immersive applications and games with Unity 2019.4 and later versions
EP3422148B1 (en) An apparatus and associated methods for display of virtual reality content
CN106462324A (en) A method and system for providing interactivity within a virtual environment
CN103893970B (en) In response to detecting that corresponding token carrys out the system of management role, environment and/or theme
Glover et al. Complete Virtual Reality and Augmented Reality Development with Unity: Leverage the power of Unity and become a pro at creating mixed reality applications
ONG. et al. Beginning windows mixed reality programming
WO2013016707A1 (en) Interactive digital content applications
Mack et al. Unreal Engine 4 virtual reality projects: build immersive, real-world VR applications using UE4, C++, and unreal blueprints
Felicia Getting started with Unity: Learn how to use Unity by creating your very own "Outbreak" survival game while developing your essential skills
Jones et al. What is VR and why use it in research?
Madsen Aspects of user experience in augmented reality
Seligmann Creating a mobile VR interactive tour guide
Ross PND: autobiographical performance for virtual reality film

Legal Events

Code Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 12743636; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number: 12743636; country of ref document: EP; kind code of ref document: A1)