WO2004080071A1 - Data processing device - Google Patents
Data processing device
- Publication number
- WO2004080071A1 (PCT/JP2004/002678)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- audio
- stream
- video
- unit
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/804—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
- H04N9/8042—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/84—Television signal recording using optical recording
- H04N5/85—Television signal recording using optical recording on discs or drums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/804—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
- H04N9/806—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components with processing of the sound signal
- H04N9/8063—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components with processing of the sound signal using time division multiplex of the PCM audio and PCM video signals
Definitions
- The present invention relates to the real-time processing of content including video and audio.
- Various data streams have been standardized that compress and encode video (video) and audio (audio) signals at low bit rates.
- For example, the system stream of the MPEG2 system standard (ISO/IEC 13818-1) is known.
- the system stream includes three types: a program stream (PS), a transport stream (TS), and a PES stream.
- the DVD video recording standard (hereinafter referred to as the “VR standard”) has been defined as a standard that enables real-time recording and editing of content data streams on phase-change optical disks (eg, DVDs).
- the DVD video standard (hereinafter referred to as “video standard”) is specified as a standard for package media that records a data stream of playback-only content such as a movie.
- Figure 1 shows the data structure of the MPEG2 program stream 10 conforming to the VR standard (hereinafter, this stream is referred to as the "VR standard stream 10").
- The VR standard stream 10 includes a plurality of video objects (Video OBjects; VOBs) #1, #2, ..., #K.
- Each VOB stores moving picture data generated by a single recording operation, from the moment the user starts recording until the moment recording stops.
- Each VOB includes a plurality of VOB units (Video OBject units; VOBUs) # 1, # 2,..., #N.
- Each VOBU is a data unit mainly including video data in the range of 0.4 second to 1 second in video playback time.
- Take as an example VOBU #1, arranged first in FIG. 1, and VOBU #2, arranged next to it.
- VOBU #1 is composed of a plurality of packs, which form the lower layer of the MPEG program stream.
- The data length (pack length) of each pack in the VR standard stream 10 is constant (2 kilobytes (2048 bytes)).
- At the head of VOBU #1 is a real-time information (RDI) pack 11, followed by a plurality of video packs indicated by "V" (video packs 12 and the like) and a plurality of audio packs indicated by "A" (audio packs 13 and the like).
- Each pack stores the following information.
- The RDI pack 11 stores information used to control the reproduction of the VR standard stream 10, for example, playback control information for the VOBU.
- The video pack 12 stores video data compressed according to the MPEG2 standard.
- The audio pack 13 stores audio data compressed according to, for example, the MPEG2 Audio standard.
- the adjacent video packs 12 and audio packs 13 store, for example, video data and audio data that are reproduced in synchronization.
- VOBU #2 also consists of multiple packs. At the head of VOBU #2, an RDI pack 14 is arranged, followed by a plurality of video packs 15 and audio packs 16. The content of the information stored in each pack is the same as for VOBU #1.
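The pack layout described above can be sketched in code. The following Python sketch is illustrative and not part of the patent: it splits a VOBU into its fixed 2048-byte packs and classifies each pack by the MPEG-2 stream ID of the first PES packet it carries. The function names, and the assumption that the stream ID follows directly after the pack header's stuffing bytes, are mine.

```python
# Hypothetical sketch: classify the 2048-byte packs of a VOBU by the
# stream ID of the first PES packet in each pack. Stream ID ranges
# follow MPEG-2 program stream conventions.
PACK_SIZE = 2048

def classify_pack(pack: bytes) -> str:
    assert len(pack) == PACK_SIZE
    assert pack[:4] == b"\x00\x00\x01\xba"      # pack_start_code
    # The MPEG-2 pack header is 14 bytes plus stuffing; the stuffing
    # length is in the low 3 bits of byte 13.
    offset = 14 + (pack[13] & 0x07)
    stream_id = pack[offset + 3]                 # byte after 00 00 01
    if 0xE0 <= stream_id <= 0xEF:
        return "video"
    if 0xC0 <= stream_id <= 0xDF:
        return "audio"
    if stream_id in (0xBD, 0xBF):
        return "private"                         # e.g. RDI information
    return "other"

def split_vobu(data: bytes) -> list:
    # A VOBU is a sequence of fixed-size packs, so splitting is trivial.
    return [data[i:i + PACK_SIZE] for i in range(0, len(data), PACK_SIZE)]
```

In a real stream the RDI pack, video packs, and audio packs of one VOBU would all appear in the list returned by `split_vobu`.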
- FIG. 2 shows the relationship between a video stream composed of video data in a video pack and an audio stream composed of audio data in an audio pack.
- a picture 21 b of a video stream is constituted by video data stored in one or more packs including a video pack 21 a.
- the next picture is constituted by the video data stored in one or more packs including video pack 22, and the next picture is further formed by the video data stored in the subsequent video packs.
- the audio data stored in the audio pack 23a constitutes an audio frame 23b.
- data of one audio frame may be divided into two or more audio packs and stored.
- one audio pack may include a plurality of audio frames.
- It is assumed that an audio frame included in a VOBU is completed within that VOBU; that is, all of the data of the audio frames included in a VOBU exists within that VOBU and is not carried over into the next VOBU.
- Video frames and audio frames are played back based on information specifying the playback time (presentation time stamp; PTS) stored in the packet header of each video pack and audio pack.
- the video picture 21b and the audio frame 23b are reproduced at substantially the same time. That is, they are played back in synchronization.
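The synchronization by PTS can be illustrated with a small sketch. This is not from the patent: PTS values in an MPEG-2 system stream count a 90 kHz clock, and the tolerance used below (one AC-3 frame duration at 48 kHz) is an assumption chosen only for illustration.

```python
# Illustrative sketch: two presentation units are treated as
# "synchronized" if their PTS difference is below one audio frame.
PTS_CLOCK_HZ = 90_000                # MPEG-2 PTS resolution

def pts_to_seconds(pts: int) -> float:
    return pts / PTS_CLOCK_HZ

def is_synchronized(video_pts: int, audio_pts: int,
                    audio_frame_sec: float = 0.032) -> bool:
    # 0.032 s corresponds to one AC-3 frame at 48 kHz (1536 samples);
    # the threshold itself is an assumption for this sketch.
    return abs(pts_to_seconds(video_pts) - pts_to_seconds(audio_pts)) < audio_frame_sec
```

For example, a video picture at PTS 90000 and an audio frame at PTS 90900 differ by 0.01 s and would be considered synchronized here.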
- the last picture 24c of VOBU # i is composed of the video data stored in the video packs 24a to 24b.
- However, each VOBU is constructed based on the playback time of the video and is not specifically constructed with audio in mind. Therefore, even though playback time information (PTS) is attached so that the data of audio frame 25c is played back in synchronization with video picture 24c, that data is stored in audio packs 25a and 25b of the next VOBU #(i+1).
- The recording position of the audio frame played back in synchronization with the video frame is shifted in this way because the video packs and audio packs are multiplexed according to the system target decoder (P-STD) model, which defines the multiplexing rules.
- In the P-STD model, the data size of the video data buffer is, for example, 224 kbytes, whereas the size of the audio data buffer is, for example, 4 kbytes. Since only a small amount of audio data can be buffered, the audio data is multiplexed so that it is read immediately before its playback timing.
- the user can register a desired VOBU playback order as a “playlist”.
- Based on the playlist, the playback device reads data from the head of each specified VOBU and plays back the video, then continues playback from the head of the next specified VOBU.
- Suppose that VOBU #k (k ≠ (i+1)) is specified to be played after VOBU #i.
- In that case, after VOBU #i, the data of VOBU #k is read. Therefore, the data of audio frame 25c stored in VOBU #(i+1), which should be reproduced in synchronization with video picture 24c, is not read out, and its audio is not reproduced. As a result, the user hears the sound cut off midway.
- Furthermore, where in VOBU #k the audio frame corresponding to its first video picture is stored differs for each VOBU.
- The storage location is determined by the relative relationship between VOBU #k and the preceding VOBU #(k-1), specifically by the bit rate of the program stream and the buffer sizes of the system target decoder (P-STD). Therefore, even if all the audio frames that should be played back in synchronization with VOBU #i are present, the audio frames that should be played back in synchronization with VOBU #k are not necessarily available immediately. For this reason as well, the user hears the sound cut off midway.
- An object of the present invention is to significantly reduce, or eliminate entirely, the period during which audio is interrupted even when video and audio are reproduced based on a playlist or the like.
- Disclosure of the Invention
- The data processing device of the present invention includes: a data input unit to which a video signal and an audio signal are input; a compression unit that compresses and encodes the video signal and the audio signal to generate video data and audio data; a stream assembling unit that divides the video data and the audio data into a plurality of packets, generates a plurality of data units in which video packets relating to the video data and audio packets relating to the audio data are multiplexed, and generates a data stream including the plurality of data units; and a recording unit that records the data stream on a recording medium.
- The stream assembling unit determines, based on at least the playback time of the video, the video packets and audio packets to be included in each data unit.
- When the whole of the audio data corresponding to the video data stored in a predetermined data unit is not included in that data unit, the stream assembling unit generates copy data by copying at least the partial audio data that is not included, and includes the copy data in the data stream.
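The assembling rule just stated can be sketched as follows. This is a minimal illustration under assumed data structures, not the patent's implementation: each audio frame is taken to know its PTS, each data unit (VOBU) to know the PTS range of its video, and frames that belong to unit i's video but were multiplexed into unit i+1 are copied back as "copy data". All class and function names are mine.

```python
# Minimal sketch of generating copy data for audio frames that were
# separated from their video into the following data unit.
from dataclasses import dataclass, field

@dataclass
class AudioFrame:
    pts: int
    payload: bytes

@dataclass
class DataUnit:
    video_pts_start: int
    video_pts_end: int
    audio_frames: list = field(default_factory=list)
    copy_data: list = field(default_factory=list)   # copies of separated frames

def attach_copy_data(units: list) -> None:
    for i in range(len(units) - 1):
        cur, nxt = units[i], units[i + 1]
        # Frames stored in unit i+1 whose PTS falls within unit i's video
        # are the "partial audio data" not included in unit i.
        separated = [f for f in nxt.audio_frames if f.pts < cur.video_pts_end]
        # Copy them so unit i can be played back without reading unit i+1.
        cur.copy_data = [AudioFrame(f.pts, bytes(f.payload)) for f in separated]
```

This variant attaches the copies to the data unit itself; the claims below also describe storing them in the following unit's first video packet or in a dedicated stream.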
- The stream assembling unit may store the copy data corresponding to a data unit in a video packet arranged at the beginning of the subsequent data unit.
- The stream assembling unit may store the corresponding copy data within the data unit itself.
- the stream assembling unit may store the copy data in a dedicated audio stream in the data stream.
- the stream assembling unit may store the copy data in a dedicated private data stream in the data stream.
- The stream assembling unit may include, in the predetermined data unit, copy data obtained by copying the entirety of the audio data corresponding to the video data.
- A copy of the entire audio data may be stored in a dedicated audio stream in the data stream.
- The stream assembling section may store a copy of the entire audio data synchronized with the video data in a dedicated audio stream in the data stream, and may further record transfer timing information indicating the transfer timing of the copy data, the specified transfer timing being shifted earlier by a predetermined time relative to the transfer timing within the copy-source data unit.
- The stream assembling unit may generate the data stream as a first file including the plurality of data units and a second file including the copy data, and the recording unit may record the data units and the copy data continuously on the recording medium.
- The stream assembling unit may generate the second file from copy data obtained by copying all of the audio data corresponding to the video data.
- Rate information is added to the audio data, and the audio data has a data length corresponding to the rate information. The compression unit compresses and encodes the audio signal at a first rate to generate the audio data.
- For the audio data included in the predetermined data unit, the stream assembling unit sets, as the rate information, the value of a second rate higher than the first rate, and stores the copy data in an empty area corresponding to the difference between a second data length defined for the second rate and the first data length of the audio data defined for the first rate.
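The "rate information" mechanism can be made concrete with a sketch. The Layer II frame-length formula (144 × bitrate / sampling rate, in bytes) is standard MPEG audio, but its use here is only an illustration of the claim, and the concrete rates are examples of mine: writing a higher rate in the frame header than the rate actually used for encoding leaves an empty area equal to the difference of the two frame lengths, into which the copy data can be placed.

```python
# Sketch of the empty area created by writing a second, higher rate as
# the rate information of an audio frame encoded at a first rate.
def layer2_frame_len(bitrate_bps: int, sample_rate_hz: int = 48_000) -> int:
    # MPEG-1 Audio Layer II frame length in bytes.
    return 144 * bitrate_bps // sample_rate_hz

def padding_for_copy_data(first_rate: int, second_rate: int) -> int:
    # The copy data fits in the difference between the frame length the
    # header promises (second rate) and the frame actually encoded.
    assert second_rate > first_rate
    return layer2_frame_len(second_rate) - layer2_frame_len(first_rate)
```

For example, encoding at 256 kbps while declaring 384 kbps yields 1152 − 768 = 384 spare bytes per frame under these assumptions.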
- The data processing method of the present invention includes the steps of: receiving a video signal and an audio signal; compressing and encoding the video signal and the audio signal to generate video data and audio data; dividing the video data and the audio data into a plurality of packets and generating a plurality of data units in which video packets relating to the video data and audio packets relating to the audio data are multiplexed; generating a data stream including a plurality of the data units; and recording the data stream on a recording medium.
- The step of generating the data stream includes determining, based on at least the playback time of the video, the video packets and audio packets to be included in each data unit; when the whole of the audio data corresponding to the video data stored in a predetermined data unit is not included in that data unit, copy data obtained by copying at least the partial audio data that is not included is included in the data stream.
- the copy data corresponding to the data unit may be stored in a video packet arranged first in a subsequent data unit.
- In the step of generating the data stream, copy data obtained by copying all of the audio data corresponding to the video data may be included in the predetermined data unit.
- the step of generating the data stream may include generating the data stream based on a first file including the plurality of data units and a second file including the copy data.
- the step of generating the data stream may include generating the second file using copy data obtained by copying all of the audio data corresponding to the video data.
- Rate information is added to the audio data, and the audio data has a data length according to the rate information. The step of generating the audio data compresses and encodes the audio signal at a first rate, and the step of generating the data stream sets, as the rate information for the audio data included in the predetermined data unit, the value of a second rate higher than the first rate, and stores the copy data in a free area corresponding to the difference between a second data length defined for the second rate and the first data length of the audio data defined for the first rate.
- On the data recording medium of the present invention, a data stream including a plurality of data units is recorded.
- Each of the plurality of data units is configured by multiplexing a video packet relating to video data and an audio packet relating to the audio data.
- The video data and a part of the audio data corresponding to the video data are stored in a predetermined data unit, but the partial audio data that is the remaining part of the audio data corresponding to the video data is not stored in that predetermined data unit.
- The data stream further includes copy data obtained by copying the partial audio data.
- a data processing device receives and decodes the above data stream, and outputs a video signal and an audio signal.
- The data processing device includes: a reproduction control unit that instructs which of the data included in the data stream is to be read for reproduction; a reading unit that, based on the instruction of the reproduction control unit, reads the video data and a part of the audio data corresponding to the video data from the predetermined data unit of the data stream; and a decoding unit that decodes the video data and the part of the audio data and synchronously outputs a video signal and an audio signal.
- The reproduction control unit further instructs the reading of the copy data after the above instruction, and the decoding unit, after decoding the part of the audio data, decodes the copy data and outputs it in synchronization with the video signal.
- BRIEF DESCRIPTION OF THE FIGURES
- FIG. 1 is a diagram showing a data structure of the MPEG2 program stream 10 conforming to the VR standard.
- FIG. 2 is a diagram showing a relationship between a video stream composed of video data in a video pack and an audio stream composed of audio data in an audio pack.
- FIG. 3 is a diagram showing a configuration of a functional block of the data processing device 30.
- FIG. 4 is a diagram showing a data structure of the VR standard stream 10.
- FIG. 5 is a diagram showing the relationship between the VR standard stream 10 and the recording area of the optical disk 131.
- FIG. 6 is a diagram showing a state in which the recorded VR standard stream 10 and management information are managed in the file system of the optical disk 131.
- FIG. 7 is a diagram showing a relationship between a VOBU according to the first embodiment, and a video stream and an audio stream.
- FIG. 8 is a flowchart showing the procedure of the recording process of the data processing device 30.
- FIG. 9 is a diagram showing a relationship between a VOBU according to the second embodiment, and a video stream and an audio stream.
- FIG. 10 is a diagram showing a relationship between a VOBU according to the third embodiment and a video stream and an audio stream.
- FIG. 11 is a diagram illustrating a relationship between a VOBU according to the fourth embodiment, and a video stream and an audio stream.
- FIG. 12 is a diagram illustrating a relationship between a VOBU according to the fifth embodiment and a video stream and an audio stream.
- FIG. 13 is a diagram showing a relationship between a VOBU according to a modification of the fifth embodiment, and a video stream and an audio stream.
- FIG. 14 is a diagram showing a relationship between a VOBU according to the sixth embodiment, and a video stream and an audio stream.
- FIG. 15 is a diagram showing the data structure of the audio frame of the AC-3 standard and the position and size of the additional information.
- FIGS. 16A and 16B are diagrams showing the data structure of an audio pack having a substream ID according to the type of audio data.
- FIG. 17 shows the data structure of the audio frame of the MPEG-1 Audio standard.
- FIG. 3 shows a configuration of a functional block of the data processing device 30.
- The data processing device 30 has a recording function of recording the VR standard stream 10 in real time on a recording medium such as a phase-change optical disk 131, for example a DVD-RAM disk or a Blu-ray Disc (BD). The data processing device 30 also has a reproduction function of reading, decoding, and reproducing the recorded VR standard stream 10. However, in performing the processing according to the present invention, the data processing device 30 does not necessarily have to have both the recording function and the reproducing function.
- The data processing device 30 is, for example, a stationary device or a camcorder.
- The data processing device 30 includes a video signal input unit 100, an audio signal input unit 102, an MPEG2 PS encoder 170, a recording unit 120, a continuous data area detection unit 160, a recording control unit 161, and a logical block management unit 163.
- The PS assembling section 104 (described later) of the MPEG2 PS encoder 170 determines, based on at least the video playback time, the video packs and audio packs to be included in each video object unit (Video Object Unit; VOBU) serving as a data unit, and generates the VOBUs. If the same VOBU does not include all of the audio data corresponding to its video, copy data obtained by copying at least the audio data that is not included is included in the VR standard stream 10 and recorded.
- “audio corresponding to video” means “audio played back in synchronization with video”.
- the copy data is stored in the subsequent VOBU (for example, the user data area in the first video pack) or in an audio file different from the VR standard stream 10 file.
- the audio data may be stored as a private stream, or may be stored as additional information so that the video and audio that are played back synchronously fit within one VOBU.
- all audio data corresponding to video may be interleaved as different audio streams in the same VOBU.
- the audio file may be stored in a separate audio file from the VR standard stream 10 file.
- all audio data corresponding to video may be stored as a private stream.
- the video signal input unit 100 is a video signal input terminal, and receives a video signal representing video data.
- the audio signal input unit 102 is an audio signal input terminal, and receives an audio signal representing audio data.
- The video signal input unit 100 and the audio signal input unit 102 are connected to the video output unit and the audio output unit, respectively, of a tuner unit (not shown), and receive video and audio signals from each.
- When the data processing device 30 is a camcorder or the like, the video signal input unit 100 and the audio signal input unit 102 are connected to the CCD (not shown) and the microphone of the camera, respectively, and receive the video and audio signals they output.
- The MPEG2 PS encoder 170 receives the video and audio signals and generates an MPEG2 program stream (PS) compliant with the VR standard, i.e., the VR standard stream 10.
- The encoder 170 includes a video compression unit 101, an audio compression unit 103, and a PS assembling unit 104. The video compression unit 101 and the audio compression unit 103 compress and encode, based on the MPEG2 standard, the video data and audio data obtained from the video signal and the audio signal, respectively.
- The PS assembling unit 104 divides the compressed and encoded video data and audio data into video packs and audio packs in units of 2 kilobytes each, arranges these packs in order to form one VOBU, and adds an RDI pack 27 at its head, thereby generating the VR standard stream 10.
- FIG. 4 shows the data structure of the VR standard stream 10.
- The VR standard stream 10 includes a plurality of VOBUs. Although FIG. 4 shows two VOBUs, more may be included.
- Each VOBU in the VR standard stream 10 is composed of a plurality of packs. These packs and the information contained in each pack are as described with reference to FIG. 1, and will not be described here.
- the video pack 12 stores MPEG2 compressed video (video) data 12a.
- the video pack 12 includes a pack header 12b and a PES packet header 12c for specifying that the video pack is a video pack.
- the system header (not shown) is also included in the pack header 12b.
- the video data 12a of the video pack 12-1 shown in FIG. 4 constitutes the data of the I frame 44 together with the video data 12d of the subsequent video pack 12-2 and the like. Further, video packs composing the B frame 45 or the P frame following the I frame are continuously recorded.
- the video data 12a includes a sequence header 41, a user data 42, and a GOP header 43.
- A group of pictures (GOP) is composed of multiple video frames.
- The sequence header 41 indicates the start of a sequence composed of a plurality of GOPs.
- the GOP header 43 indicates the beginning of each GOP.
- The first frame of each GOP is an I-frame. Since these headers are well known, a detailed description thereof is omitted.
- the user data 42 is provided between the sequence header 41 and the GOP header 43, and can describe arbitrary data.
- The total playback time of all GOPs in each VOBU is, in principle, adjusted to fall within the range of 0.4 seconds to 1.0 seconds; the last VOBU, however, is adjusted to fall within the range of 0 seconds to 1.0 seconds. This is because the VR standard stream 10 is recorded in real time, and recording may be stopped at a timing that leaves less than 0.4 seconds. Within these ranges, fluctuations in the video playback time are allowed for each VOBU.
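The duration rule above is simple enough to express directly. The following sketch (my own formulation, not from the patent) validates a sequence of VOBU playback durations: every VOBU must span 0.4-1.0 s of video, except the last, which may be shorter because real-time recording can stop at any moment.

```python
# Sketch of the VOBU duration rule: 0.4 s <= duration <= 1.0 s for all
# VOBUs except the last, which may be anywhere in 0-1.0 s.
def vobu_durations_valid(durations: list) -> bool:
    if not durations:
        return True
    *body, last = durations
    return all(0.4 <= d <= 1.0 for d in body) and 0.0 <= last <= 1.0
```

So a recording stopped early, leaving a final 0.2-second VOBU, is still valid, while a 0.3-second VOBU in the middle of the stream is not.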
- The recording unit 120 controls the pickup 130 based on the instructions of the recording control unit 161, and records the VR standard stream 10 starting from the position of the logical block number designated by the recording control unit 161.
- The recording unit 120 divides each VOBU into 32-kilobyte units, adds an error correction code to each unit, and records each unit on the optical disc 131 as one logical block.
- If the recording of one VOBU is completed in the middle of a logical block, the recording of the next VOBU continues immediately without leaving a gap.
- FIG. 5 shows the relationship between the VR standard stream 10 and the recording area of the optical disc 131.
- Each VOBU of the VR standard stream 10 is recorded in a continuous data area of the optical disc 131.
- The continuous data area is composed of physically continuous logical blocks, and this area stores data corresponding to 17 seconds or more of playback at the maximum recording/playback rate.
- the data processing device 30 adds an error correction code to each logical block.
- the data size of a logical block is 32 kbytes.
- Each logical block contains sixteen 2-kilobyte sectors.
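The sector and logical-block arithmetic just described can be sketched as follows; the helper names are mine, and the sizes come directly from the text (2-kbyte sectors, sixteen sectors per 32-kbyte logical block, error correction applied per logical block).

```python
# Sketch of the sector/logical-block arithmetic of the recording medium.
SECTOR_SIZE = 2 * 1024
SECTORS_PER_BLOCK = 16
BLOCK_SIZE = SECTOR_SIZE * SECTORS_PER_BLOCK     # 32 kbytes

def block_of_sector(sector_no: int) -> int:
    # The logical block (and thus the error-correction unit) a sector
    # belongs to.
    return sector_no // SECTORS_PER_BLOCK

def blocks_needed(data_len: int) -> int:
    # Number of logical blocks touched by `data_len` bytes; a VOBU that
    # ends mid-block shares its last block with the next VOBU.
    return -(-data_len // BLOCK_SIZE)            # ceiling division
```

For instance, one byte past a 32-kbyte boundary already touches a second logical block.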
- The continuous data area detection unit 160 checks the usage status of the sectors of the optical disc 131 managed by the logical block management unit 163, and detects an unused continuous free logical block area corresponding to the above-mentioned time length.
- Note that a continuous free logical block area of 17 seconds or more need not always be detected; the size of the continuous data area may be determined dynamically. That is, if a continuous data area of 20 seconds can be secured at some point during recording, it is sufficient to secure a continuous data area of 14 seconds as its continuation in order to guarantee continuous playback.
- The recording control unit 161 controls the operation of the recording unit 120.
- The recording control unit 161 instructs the recording unit 120 to record the VR standard stream 10 as a data file (for example, with the file name "VR_MOVIE.VRO") on the optical disc 131.
- The recording unit 120 also records the management information file (file name "VR_MANGR.IFO") for the VR standard stream, received from the recording control unit 161, on the optical disc 131.
- the management information includes, for example, the data size of each VOBU, the number of included video fields, and the data size of the first I frame.
- The more specific control operation of the recording control unit 161 is as follows. The recording control unit 161 issues an instruction to the continuous data area detection unit 160 in advance, causing it to detect a continuous free logical block area. The recording control unit 161 notifies the recording unit 120 of the logical block number each time writing occurs in units of one logical block, and notifies the logical block management unit 163 when a logical block has been used up. Note that the recording control unit 161 may cause the continuous data area detection unit 160 to dynamically detect the size of a continuous free logical block area. The continuous data area detection unit 160 re-detects the next continuous data area when the remainder of one continuous data area, converted at the maximum recording/reproducing rate, falls below, for example, 3 seconds.
- FIG. 6 shows a state in which the recorded VR standard stream 10 and the management information are managed in the file system of the optical disc 131.
- The file system is, for example, UDF (Universal Disk Format), that is, ISO/IEC 13346 ("Volume and file structure of write-once and rewritable media using non-sequential recording for information interchange").
- In FIG. 6, the continuously recorded VR standard stream 10 is recorded under the file name "VR_MOVIE.VRO", and the management information is recorded under the file name "VR_MANGR.IFO".
- the file name and the location of the file entry are managed by FID (File Identifier Descriptor).
- An allocation descriptor in a file entry is used to associate a file with the data areas that make up that file. The first sector number is set in the allocation descriptor as the position of the data constituting the file.
- The file entry of the VR standard stream file holds allocation descriptors a to c that manage each of the continuous data areas (CDAs). One file is divided into the plurality of areas a to c because, for example, a defective logical block or a non-writable PC file existed in the middle of area a.
- The file entry of the management information file holds allocation descriptor d, which refers to the area where the management information is recorded.
- The logical block management unit 163 manages the usage status of each logical block based on the used logical block numbers notified from the recording control unit 161. Specifically, it records and manages whether each sector constituting a logical block is used or unused, using the space bitmap descriptor area defined by the file structure of UDF or ISO/IEC 13346. In the final stage of the recording process, the file identifier descriptor (FID) and the file entry are written to the file management area on the disc.
- The UDF standard is equivalent to a subset of the ISO/IEC 13346 standard.
- By connecting a phase-change optical disk drive to a PC via the IEEE 1394 interface and the SBP-2 (Serial Bus Protocol-2) protocol, a file written in a UDF-compliant format can be handled as a single file from the PC.
- It is assumed here that the PS assembling unit 104 has generated a VR standard stream 10 in which the mutually corresponding video data and audio data are not all included in one VOBU.
- Since each VOBU is determined based on the video playback time and the like, part of the audio data may be stored in a subsequent VOBU, separate from the corresponding video data.
- It is also assumed that the audio data included in the same VOBU as the video data consists of an integer number of audio frames.
- FIG. 7 shows the relationship between a VOBU according to the present embodiment, and a video stream and an audio stream.
- the top row shows the set of VOBUs that make up the VR standard stream 10 set as an MPEG file.
- the second row is a set of video data included in each VOBU
- the third row represents the set of audio data corresponding to that video data.
- the video data included in VOBU # i is represented as V (i) or the like.
- the top row shows the VOBUs that make up the MPEG-2 program stream.
- the second row shows the set of video frames stored in each VOBU.
- the third row shows the storage position of the audio data A0(i) played back in synchronization with each set of video frames, relative to the VOBU boundaries.
- the vertical dotted lines show the positional relationship between the two (the same applies in FIGS. 9, 10, 11, 12, 13, and so on).
- the storage position of A0(i) starts in the middle of VOBU#i, and its end is stored at the beginning of VOBU#(i+1).
- the audio data A stored from the beginning of VOBU#(i+1) up to just before audio data A0(i+1) corresponds to the video data stored in VOBU#i.
- This audio data is hereinafter referred to as “separated storage data”.
- the PS assembling unit 104 generates copy data representing the same contents as the separated storage data when generating VOBU#i and VOBU#(i+1).
- the copy data is stored in the first video pack of VOBU#(i+1) following VOBU#i.
- the copy data is stored in the user data area of the first video pack (for example, the user data area 42 in FIG. 4).
- storing the copy data in the user data area 42 means that all of the video and audio data is contained in one VR standard stream 10 (one file).
- the copy data means a copy of the audio data itself of the separated storage data.
- since the SCR value in the pack header of the audio pack has no meaning here as transfer timing, the copied value may be used as is.
- the PTS value in the PES packet header in the pack can be used as it is.
- similarly, when generating VOBU#(i+1) and #(i+2), the PS assembling unit 104 generates copy data representing the same contents as the separated storage data of the audio data A0(i+1), corresponding to the video data V(i+1), that is stored in VOBU#(i+2). Then, the copy data is stored in the first video pack of VOBU#(i+2) following VOBU#(i+1).
- the PS assembling unit 104 has the function of attaching PTSs, knowing which picture in the video should be played back in synchronization with which audio frame, so it knows which part of the audio data A0(i) is separated and stored. Therefore, it can easily identify the separated storage data.
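The identification described above can be sketched as follows. This is an illustrative model, not the patent's implementation: the helper name and the `(pts, payload)` frame representation are assumptions, with PTS values in 90 kHz clock units.

```python
def find_separated_data(synced_frames, frames_in_vobu):
    """Audio frames that play in sync with VOBU#i's video (known via
    their PTS) but were not physically multiplexed into VOBU#i: these
    are the "separated storage data" that lands in VOBU#(i+1).

    synced_frames:  list of (pts, payload) for all audio synchronized
                    with this VOBU's video.
    frames_in_vobu: the subset actually multiplexed into this VOBU.
    """
    stored = {pts for pts, _ in frames_in_vobu}
    return [(pts, d) for pts, d in synced_frames if pts not in stored]
```

A copy of the returned frames is what the assembler would then write into the user data area of the next VOBU's first video pack.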
- FIG. 8 is a flowchart showing a procedure of a recording process of the data processing device 30.
- the video signal input unit 100 and the audio signal input unit 102 receive a video signal and an audio signal, respectively.
- the video compression section 101 and the audio compression section 103 compression-encode the video data and the audio data obtained from each signal.
- the PS assembling unit 104 generates VOBU # i based on the video playback time and the like.
- the arrangement (order) of the packs, such as the video packs, in VOBU#i is determined in accordance with the rules of the system target decoder model. For example, the pack order is determined so as to satisfy the buffer capacity specification of the program stream system target decoder (P-STD) model.
- in step S84, it is determined whether the corresponding video data and audio data are stored in the same VOBU.
- the data of the generated VOBU is sent to the sequential recording unit 120.
- the recording unit 120 records the data on the optical disc 131.
- the processing from step S83 is repeated.
- the PS assembling unit 104 describes the separated storage data (partial data A in FIG. 7) in the user data area of the first video pack of the next VOBU#(i+1), and outputs the data to the recording unit 120.
- the recording unit 120 records the data on the optical disc 131.
- in step S86, the PS assembling unit 104 determines whether all the video data and audio data have been processed. If not, the processing from step S83 is repeated. If it has, the recording operation ends.
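The recording flow of steps S83 through S86 can be paraphrased as a loop. This is a sketch under assumed names (`Vobu`, `record_stream`, and the pre-chunked input are illustrative, not from the patent): separated data produced while assembling one VOBU is carried over and written into the user data area of the next VOBU's first video pack.

```python
class Vobu:
    """Minimal stand-in for one assembled VOBU."""
    def __init__(self, video, audio):
        self.video, self.audio = video, audio
        self.user_data = None  # user data area of the first video pack

def record_stream(chunks, writer):
    """chunks: list of (video, audio_in_vobu, audio_spilled) per VOBU,
    as an assembler would produce them; writer collects finished VOBUs
    in recording order."""
    pending = None
    for video, audio, spilled in chunks:
        vobu = Vobu(video, audio)
        if pending:
            # Step S85: describe the previous VOBU's separated data in
            # this VOBU's first video pack user data area.
            vobu.user_data = pending
        pending = spilled          # carried into the following VOBU
        writer.append(vobu)        # sequential recording (step S85/S86)
    return writer
```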
- the data processing device 30 includes a video display unit 110, an audio output unit 112, a playback unit 121, a conversion unit 141, an output interface unit 140, a playback control unit 162, a playlist playback control unit 164, and an MPEG-2 PS decoder 171.
- the video display unit 110 is a display device such as a television that outputs a video
- the audio output unit 112 is a speaker or the like that outputs audio. Note that the video display unit 110 and the audio output unit 112 are not essential elements of the data processing device 30 and may be provided as external devices.
- the playback unit 121 converts the VR standard stream 10, read as an analog signal from the optical disc 131 through the optical pickup 130 based on instructions from the playback control unit 162, into a digital signal for reproduction.
- the playback control unit 162 specifies the VOBU to be played back and the data included in that VOBU, and instructs the optical pickup 130 to read the data.
- the playlist playback control unit 164 plays back the scenes of the moving image in the order specified by the user. Each scene is managed, for example, on a VOBU basis.
- the MPEG-2 PS decoder 171 (hereinafter, "decoder 171") has a program stream decomposition unit 114, a video decompression unit 111, and an audio decompression unit 113.
- the program stream decomposition unit 114 (hereinafter, "PS decomposition unit 114") separates video data and audio data from the VR standard stream 10.
- the video decompression unit 111 decodes video data that has been compression-encoded based on the MPEG-2 standard and outputs it as a video signal.
- the audio decompression unit 113 decodes audio data that has been compression-encoded based on the MPEG-1 audio standard and outputs it as an audio signal.
- when reproducing the VR standard stream 10 recorded by the data processing device 30, the reading of data from the optical disc 131 and the decoding (playback) of the read data are performed in parallel. At this time, control is performed so that the data read rate is faster than the maximum data playback rate, so that there is no shortage of data to be played back. As a result, as long as playback of the VR standard stream 10 continues, extra data to be played back is accumulated per unit time by the rate difference between the data read rate and the maximum data playback rate. The data processing device 30 reproduces this extra data during periods when the pickup 130 cannot read data (for example, during a seek operation), thereby realizing uninterrupted playback of the VR standard stream 10.
- for example, assume that the data read rate of the playback unit 121 is 11.08 Mbps,
- the maximum data playback rate of the PS decomposition unit 114 is 10.08 Mbps,
- and the maximum travel time of the pickup is 1.5 seconds.
- then 15.12 Mbits of extra data is required during the movement of the pickup 130 in order to reproduce the VR standard stream 10 without interruption.
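Taking the figures in the text (an 11.08 Mbps read rate, a 10.08 Mbps maximum playback rate, a 1.5-second pickup travel time), the buffering arithmetic can be checked directly; the function names are illustrative:

```python
def extra_mbits_needed(max_playback_mbps, seek_seconds):
    """Data (Mbit) that must already be buffered before a seek so that
    playback continues while the pickup cannot read."""
    return max_playback_mbps * seek_seconds

def seconds_to_accumulate(read_mbps, playback_mbps, need_mbit):
    """Continuous reading time needed to bank `need_mbit` of surplus,
    given the read/playback rate difference."""
    return need_mbit / (read_mbps - playback_mbps)

need = extra_mbits_needed(10.08, 1.5)             # ≈ 15.12 Mbit
t = seconds_to_accumulate(11.08, 10.08, need)     # ≈ 15.12 s of reading
```

So with a 1 Mbps rate margin, roughly 15 seconds of uninterrupted reading are needed before the stream can survive a worst-case 1.5-second seek.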
- the reproduction control unit 162 specifies the VOBU to be reproduced and instructs the optical pickup 130 to read data sequentially from the beginning.
- the PS decomposition section 114 separates the VR standard stream 10 reproduced via the pickup 130 and the reproduction section 121 into video data and audio data.
- the video decompression unit 111 and the audio decompression unit 113 decode the video data and the audio data, respectively; the video based on the resulting video signal is displayed on the video display unit 110,
- and the audio based on the audio signal is output from the audio output unit 112.
- the playlist playback control unit 164 first instructs the optical pickup 130 to read VOBU # i.
- the PS decomposition unit 114 separates the data of VOBU#i reproduced via the optical pickup 130 and the playback unit 121 into video data and audio data, and decodes and outputs them. At this time, if data is described in the user data area of the first video pack in VOBU#i, that data is ignored, since it is not audio data corresponding to the video of VOBU#i.
- next, the playlist playback control unit 164 instructs the optical pickup 130 to read the data in the user data area of the first video pack of the following VOBU#(i+1). Since this data is the separated storage data for the audio corresponding to the video included in VOBU#i, the audio decompression unit 113 decodes it after decoding the audio data in VOBU#i, and outputs it as audio. After that, the data of the next playback target VOBU#k is read based on instructions from the playlist playback control unit 164, and the PS decomposition unit 114 obtains the data of VOBU#k via the playback unit 121, decodes it, and outputs it.
- since the RDI pack is placed at the head of the VOBU and the video pack is placed next to it, the separated storage data in the first video pack of the following VOBU can be read easily and quickly.
- Separately stored data is recorded over multiple video packs near the beginning of the VOBU.
- by also reading out the separated storage data at playback time, the data processing device 30 obtains all of the audio data corresponding to the video included in the VOBU, so the audio is played back without interruption.
- the separated storage data in the audio data A0(i), instead of being stored in the user data area of the first video pack of VOBU#(i+1), may be stored and multiplexed as a private stream in VOBU#i.
- the data processing device 30 can also output the recorded data without going through the stream separation and decoding described above. That is, the conversion unit 141 converts the read VR standard stream 10 into a predetermined format (for example, the format of the DVD-Video standard), and the output interface unit 140 outputs the converted stream. At this time, in addition to the data of the VOBUs of the VR standard stream 10 being read, the data in the user data area of the first video pack of the following VOBU is read, so that playback without audio interruption also becomes possible at the output destination device.
- the output interface unit 140 is an interface conforming to, for example, the IEEE 1394 standard, and can control data reading by an external device and data writing from an external device.
- the following second and subsequent embodiments are various variations on the recording / reproducing operation of the data processing device 30 of the present embodiment.
- Each component of the data processing device 30 described in the first embodiment has the same function in the following embodiments unless otherwise specified.
- it is assumed that the VR standard stream 10 stores one video stream and one corresponding audio stream, and that part of the audio data is not stored in the same VOBU as the corresponding video data.
- in that case, a copy of the separated storage data was stored in the video data (video pack) of the following VOBU.
- FIG. 9 shows a relationship between a VOBU according to the present embodiment, and a video stream and an audio stream.
- the VR standard stream 10 is defined as one MPEG file as in Embodiment 1, but unlike Embodiment 1, two audio streams are multiplexed. Let the audio stream corresponding to the video stream be "audio stream #0"; separated storage data exists in audio stream #0.
- the PS assembling unit 104 records a copy of the data of audio stream #0 as another audio stream #1 on the optical disc 131. More specifically, the PS assembling unit 104 copies the data of audio stream #0 corresponding to the video included in VOBU#i and generates audio packs of audio stream #1. Those audio packs are then multiplexed into VOBU#i of the VR standard stream 10.
- the audio streams # 0 and # 1 can each be identified by the stream ID described in the packet header of each pack.
- the amount of data to be copied must satisfy the audio buffer constraint of the program stream system target decoder (P-STD).
- the audio data A0(i), A0(i+1), A0(i+2), etc. constituting audio stream #0 are copied, and the copied data is stored as A1(i), A1(i+1), A1(i+2), etc.
- note that the copy data of A0(i) cannot always be stored entirely in VOBU#i.
- if the total playback time of the video frames in VOBU#i equals the total transfer time of the VOBU#i data (the difference between the first SCR value of VOBU#i and the first SCR value of VOBU#(i+1)), the copy data of A0(i) can be stored exactly.
- the PS assembling unit 104 modifies the MPEG-standard SCR and PTS attached to the audio packs of audio stream #0 to generate the SCR and PTS for audio stream #1. In other words, the PS assembling unit 104 sets the SCR and PTS values attached to the audio packs of audio stream #1 a predetermined amount smaller than the SCR and PTS values attached to the packs of audio stream #0 that store data representing the same audio.
- this is because a pack with a smaller SCR can be placed at a position in the VR standard stream 10 where it is read earlier in the pack sequence. Therefore, more of the data that would be stored in VOBU#(i+1) as separated storage data in Embodiment 1 can be stored in VOBU#i.
- the PS assembling unit 104 describes change amount data, indicating the amount by which the SCR and PTS were set smaller, for example in the user data area 42 of the video pack arranged at the beginning of VOBU#i.
- the playlist playback control unit 164 decodes not stream #0 but stream #1 when playing back the video of VOBU#i recorded on the optical disc 131. This is because, of the audio data corresponding to the video data stored in VOBU#i, stream #1 contains more data than stream #0.
- the PS decomposition unit 114 reads the shift amount of the playback timing from the user data area 42 of the video pack arranged at the beginning of VOBU#i, adds this value to the PTS (that is, delays the playback time), and plays the audio. This allows the video and audio to be played back in synchronization.
- the difference value between the PTS of the audio frame AF#0 that is synchronized with the first video frame of VOBU#i and the PTS of the audio frame that includes the copy data of AF#0 may be recorded in the management information file for the video stream file "VR_MOVIE.VRO". The difference value may also be recorded in the manufacturer-specific data storage area in the RDI pack of each VOBU.
- in this case, the playback control unit subtracts the difference value from the timestamp value of the video frame at the beginning of the VOBU, and plays back the audio frames included in audio stream #1 from the subtraction result onward.
- alternatively, the shift amount of the playback timing may be recorded in the manufacturer-specific data area in the RDI pack of each VOBU.
- during normal playback, audio stream #0 is played back. That is, when the moving image file is played back as a general MPEG file, audio stream #0 is used.
- a flag indicating that duplicate data of audio stream #0 is stored within audio stream #1 may be recorded in the management information file for the video stream file "VR_MOVIE.VRO". This flag should be recorded at least in VOB units. It may also be recorded in the video stream VOB or in audio stream #1. This flag makes it possible to distinguish whether audio stream #1 stores audio different from audio stream #0 or a copy of audio stream #0.
- the separated storage data is stored in the user data area 42 in the video pack.
- in the present embodiment, the data processing device 30 records the separated storage data as a file different from the MPEG file that stores the VR standard stream 10.
- FIG. 10 shows the relationship between the VOBUs, the video stream, and the audio stream according to the present embodiment.
- the PS assembling unit 104 identifies the separated storage data related to VOBU#i and generates audio data #i by copying that separated storage data. Then, the PS assembling unit 104 records the audio data and the VOBUs constituting the VR standard stream 10 physically alternately. Each audio data item and each VOBU are recorded as one audio file and one MPEG file, respectively.
- that is, the PS assembling unit 104 interleaves the audio data #i immediately after VOBU#i.
- the playlist playback control unit 164 reads not only VOBU#i but also the following audio data #i, and then reads the data of VOBU#k to be played back next. After the data is separated into video data and audio data in the PS decomposition unit 114, the video decompression unit 111 and the audio decompression unit 113 decode and output them. In particular, the audio decompression unit 113 decodes and plays back the audio data in the audio packs included in VOBU#i, and then decodes and plays back the audio data #i included in the audio data file.
- since the audio data related to the separated storage data is stored next to the VOBU to be played back, the data processing device 30 can realize continuous reading of the audio data easily and quickly.
- by also reading out the separated storage data during playback, the data processing device 30 obtains all the audio data corresponding to the video included in the VOBU, so the audio is played back without interruption.
- in the present embodiment, a copy of the separated storage data is recorded immediately after the corresponding VOBU, but it may be recorded immediately before the corresponding VOBU. (Embodiment 4)
- in Embodiment 3, the data processing device generates and records an audio file separate from the MPEG file based only on the separated storage data of the audio stream. Also, for example, the audio data #i related to VOBU#i was recorded immediately after VOBU#i.
- in the present embodiment, the data processing device generates and records an audio file separate from the MPEG file for all the data of the audio stream. Further, the audio data related to each VOBU is recorded before that VOBU.
- FIG. 11 shows a relationship between a VOBU according to the present embodiment, a video stream and an audio stream.
- when generating VOBU#i, the PS assembling unit 104 identifies the audio data A0(i) corresponding to the video data V(i) included in that VOBU,
- and generates audio data #i that is a copy of the data constituting A0(i).
- the PS assembling unit 104 records the audio data and the VOBUs constituting the VR standard stream 10 physically alternately.
- each audio data item and each VOBU are recorded as one audio file and one MPEG file, respectively.
- the PS assembling unit 104 interleaves the audio data #i immediately before VOBU#i.
- at playback, the playlist playback control unit 164 instructs that audio data #i be read out first, before reading out VOBU#i. Then, before the reading of VOBU#i ends, the reading of audio data #i ends and its decoding by the audio decompression unit 113 finishes, so all of the audio can be played back in synchronization with the video. Therefore, even when playback of VOBU#k (k ≠ i+1) is specified afterwards, seamless playback of the audio can be realized. Note that, in the present embodiment, the audio data #i is recorded before VOBU#i, but as in Embodiment 3 it may be recorded after VOBU#i. In this case, it is necessary to read the audio data #i after the playback of VOBU#i and before starting to read another VOBU.
- in Embodiments 3 and 4, the structure of the data in the audio file is not specifically mentioned, but it may be an elementary audio stream, an MPEG-2 program stream including the audio stream, an MP4 stream including the audio stream, or another system stream.
- in Embodiment 1, the separated storage data related to VOBU#i is stored in the next VOBU#(i+1).
- the separated storage data related to VOBU # i is stored in VOBU # i as another stream.
- FIG. 12 shows a relationship between a VOBU according to the present embodiment, a video stream and an audio stream.
- the PS assembling unit 104 copies the separated storage data A related to VOBU#i and multiplexes it into VOBU#i as a private stream dedicated to the separated storage data A.
- a stream ID is added to identify the video stream and the audio stream included in the stream.
- the stream ID is stored in the PES packet header.
- the stream ID of the video stream is 0xE0,
- and the stream ID of the audio stream is 0xC0 or 0xBD.
- 0xBD is a value specified for private streams in the MPEG-2 system standard.
- the compression code of the audio stream is further identified by one byte immediately after the PES packet header.
- in the present embodiment, 0xBD is used as the stream ID of the newly provided private stream.
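The stream ID values quoted in the text can be checked with a small classifier. This is an illustrative sketch (the function name and return strings are assumptions): 0xE0 identifies the video stream, 0xC0 an MPEG audio stream, and 0xBD a private stream, whose compression format is identified by the byte immediately after the PES packet header (0x80 is the sub-stream ID used for AC-3 data in FIG. 16(a)).

```python
def classify_pes(stream_id, first_payload_byte=None):
    """Classify a PES packet by the stream IDs named in the text."""
    if stream_id == 0xE0:
        return "video"
    if stream_id == 0xC0:
        return "mpeg-audio"
    if stream_id == 0xBD:
        # Private stream: the byte right after the PES packet header
        # identifies the compression code (0x80 -> AC-3 here).
        if first_payload_byte == 0x80:
            return "private: ac-3"
        return "private"
    return "other"
```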
- FIG. 13 shows a relationship between a VOB U according to a modification of the present embodiment, and a video stream and an audio stream.
- the PS assembling unit 104 stores the separated storage data A related to VOBU#i as a private stream in VOBU#i.
- a copy of A is recorded as additional data in the audio frames of VOBU#i.
- FIG. 14 shows the relationship between the VOBU according to the present embodiment, the video stream, and the audio stream.
- the PS assembling unit 104 copies and stores the separated storage data A related to the audio stream # 0 of VOBU # i in the additional data (AD) area in the audio frame of VOBU # i.
- FIG. 15 shows a data structure of an AC-3 standard audio frame generated by the audio compression unit 103.
- the AC-3 audio frame is composed of synchronization information (SI), bit stream information (BSI), audio blocks (ABn to ABn+5), and additional information (AD).
- S I synchronization information
- B S I bit stream information
- AD additional information
- in the synchronization information (SI), rate information indicating the bit rate of the audio frame is recorded.
- the bit rate of the audio frame is 448 kbps (the frame size code indicates 448 kbps).
- the audio frame has a data length (1792 bytes shown in Fig. 15) according to the bit rate information specified in the synchronization information (SI).
- however, the audio compression unit 103 actually records the synchronization information, the bit stream information, and the effective data of the audio blocks at a bit rate of 256 kbps or less, and leaves the additional information area empty, reserved for the separated storage data A to be recorded later.
- the PS assembling unit 104 stores the copy data of the separated storage data A shown in FIG. 14 in this additional information area. The average bit rate of the audio corresponding to the separated storage data A is assumed to be 192 kbps, which is smaller than the difference between 448 kbps and 256 kbps.
- in this way, an empty area is provided in each audio frame of the audio stream recorded from the beginning, and the separated storage data is copied into that empty area, so that the audio data not stored in the VOBU (the separated storage data) can effectively be stored.
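The frame arithmetic behind this scheme follows from the AC-3 frame length: at 48 kHz an AC-3 frame carries 1536 samples (32 ms), so a 448 kbps frame size code implies the 1792-byte frame shown in FIG. 15. A sketch of the capacity calculation (function name is illustrative):

```python
SAMPLES_PER_AC3_FRAME = 1536   # six audio blocks of 256 samples each

def ac3_frame_bytes(bitrate_bps, sample_rate_hz=48000):
    """Byte length of one AC-3 frame implied by the signalled bit rate:
    bitrate * frame_duration / 8."""
    frame_seconds = SAMPLES_PER_AC3_FRAME / sample_rate_hz  # 32 ms at 48 kHz
    return int(bitrate_bps * frame_seconds / 8)

frame = ac3_frame_bytes(448_000)   # signalled frame size: 1792 bytes
used = ac3_frame_bytes(256_000)    # budget for the real audio payload
spare = frame - used               # additional-information room per frame
```

The spare 768 bytes per frame correspond exactly to a 192 kbps side channel, matching the average bit rate the text allows for the copied separated storage data.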
- by having the PS decomposition unit 114 analyze the data stream, the audio decompression unit 113 can obtain the copy data of the separated storage data A, which cannot be obtained with the conventional data structure. As a result, even in a video scene where the audio would normally be interrupted, the audio can be played back seamlessly in synchronization with the video.
- alternatively, half of the bit rate specified in the synchronization information (SI) may be used for the actual bit rate, and the other half for the bit rate of the separated storage data.
- for example, the AC-3 audio stream may be 448 kbps, with the actual bit rate being 224 kbps and the bit rate of the separated storage data being 224 kbps.
- in this case, all the audio data of audio stream #0 can be stored in the additional information area.
- the copy of the separated storage data A may be a continuous sequence of AC-3-compliant audio frames, and one audio frame of the separated storage data A may be recorded across the additional information areas of two AC-3 standard audio frames.
- the data structure of the separated storage data may be an MPEG2 program stream including an audio elementary stream, or may be another system stream.
- the separated storage data A is stored in the additional information (AD) area of the AC-3 standard audio frame.
- in the present embodiment, the separated storage data A is stored in the additional data (ancillary_data) area in audio frames of the MPEG-1 audio standard.
- Other configurations are the same as in the sixth embodiment.
- FIG. 17 shows the data structure of an audio frame of the MPEG-1 audio standard in the present embodiment.
- the audio frame of the MPEG-1 audio standard has a header, an error check, audio data, and additional data (ancillary_data).
- the audio compression unit 103 generates audio frames having the data structure shown in FIG. 17.
- the header records information indicating the bit rate, sampling frequency, and layer of the audio frame. In the present embodiment, these are 384 kbps, 48 kHz, and Layer 2, respectively. Each audio frame then has a data length corresponding to the bit rate information specified in the header. However, the audio compression unit 103 actually records the header, error check, and audio data so that their total is equivalent to 256 kbps or less, leaving the additional data area empty for the copy of the separated storage data A to be recorded later.
- the PS assembling unit 104 stores the copy data of the separated storage data A shown in FIG. 14 in this additional data area.
- the bit rate of the audio stored as a copy of the separated storage data A is assumed to be 128 kbps or less on average.
- in this way, an empty area is provided in each audio frame of the audio stream recorded from the beginning, and the separated storage data is copied into that empty area, so that the audio data not stored in the VOBU (the separated storage data) can effectively be stored.
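As with the AC-3 case, the numbers here are consistent with the MPEG-1 Layer II frame formula: frame length in bytes is 144 × bitrate / sampling_frequency (1152 samples per frame, padding ignored). A sketch of the budget (function name is illustrative):

```python
def layer2_frame_bytes(bitrate_bps, sample_rate_hz=48000):
    """MPEG-1 Layer II frame length: 1152 samples per frame, so
    bytes = 144 * bitrate / sample_rate (padding slot ignored)."""
    return 144 * bitrate_bps // sample_rate_hz

frame = layer2_frame_bytes(384_000)    # length signalled by the header
used = layer2_frame_bytes(256_000)     # header + error check + audio data
ancillary = frame - used               # ancillary_data room per frame
```

The 384 spare bytes per frame hold exactly a 128 kbps copy stream, matching the average bit rate given for the copied separated storage data.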
- by having the PS decomposition unit 114 analyze the data stream, the audio decompression unit 113 can obtain the copy data of the separated storage data A, which cannot be obtained with the conventional data structure. As a result, even in a video scene where the audio would normally be interrupted, the audio can be played back seamlessly in synchronization with the video.
- the audio stream that is a copy of the separated storage data may be a continuous form of audio frames conforming to the MPEG-1 audio standard.
- alternatively, one audio frame of the separated storage data A may be recorded across the additional data areas in the audio frames of two MPEG-1 audio standard frames.
- the data structure of the separated storage data may be an MPEG2 program stream including an audio elementary stream, or may be another system stream.
- the data processing device 30 may be operated so that no special processing is performed during recording, and the separated storage data itself is directly read during reproduction.
- for example, assume that the playlist specifies playback of VOBU#k (k ≠ i+1) after playback of VOBU#i.
- in that case, after reading the data of VOBU#i, the playlist playback control unit 164 must always read the separated storage data, and then start reading VOBU#k. With this approach, redundant recording of the separated storage data is unnecessary, and the audio can still be played back seamlessly.
- however, under the MPEG-2 standard it may be necessary to read up to a maximum of 1 second of the program stream, which may make seamless playback of the video difficult. Therefore, in this case, it is desirable to generate the program stream so that no separated storage data arises.
- to that end, the video compression unit 101 may generate each frame so that the video frame size in each VOBU is no more than "video bit rate / frames per second". As a result, no separated storage data arises for the audio, because one frame of audio data can be transferred in each one-frame period. It should be noted that the data size of I (intra) frames is then limited, and the image quality may be degraded.
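The per-picture bound quoted above is a simple quotient; a sketch, with the rate and frame-rate figures in the example being assumptions rather than values from the patent:

```python
def max_picture_bits(video_bitrate_bps, frames_per_second):
    """Upper bound on each encoded picture so that one frame period of
    video never consumes more than its share of the bit budget, leaving
    room to multiplex one frame of audio per frame period (no spill)."""
    return video_bitrate_bps / frames_per_second

# e.g. at an assumed 10 Mbps video rate and 25 fps:
# max_picture_bits(10_000_000, 25) -> 400,000 bits per picture
```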
- alternatively, the audio compression unit 103 may compression-encode the audio data under the constraint that the separated storage data fits within a predetermined number of audio frames (for example, 4 frames).
- in the above description, the VR standard stream, which is a program stream, has been used as an example.
- however, the format may conform to a digital television broadcasting standard using a transport stream,
- or the format may be based on digital data broadcasting using a transport stream.
- in a transport stream, transport stream packets are used. Note that a "pack" is known as one exemplary form of a packet.
- in each embodiment, the recording medium has been described as a phase-change optical disc, but, for example, Blu-ray discs, DVD-RAM, DVD-R, DVD-RW, DVD+RW, MO, CD-R, CD-RW, and other optical discs, as well as recording media of other disc shapes such as hard disks, can also be used.
- a semiconductor memory such as a flash memory may be used.
- the read/write head is a pickup for an optical disc; for example, if the recording medium is an MO, the read/write head is a pickup and a magnetic head, and if it is a hard disk, it is a magnetic head.
- as the audio compression method, MPEG-1 audio, MPEG-2 audio, AAC, and the like can generally be used.
- the PS assembling unit 104 provides a substream ID (0x80) in the one byte immediately following the PES packet header so that the stream can be identified.
- FIG. 16 (a) shows a data structure of an audio pack having a substream ID (0x80) and including AC-3 data.
- FIG. 16(b) shows the data structure of an audio pack having a substream ID (0xFF) and including data. This value (0xFF) is a value specified in the DVD-Video standard.
- the separated storage data may be only the elementary stream, or it may be copied including the PES packet header.
- the above description does not mention with which VOBU an audio frame at the boundary between two VOBUs should be played in synchronization; for example, audio frames whose PTS is at or after the PTS of a VOBU's first video frame may simply be regarded as corresponding to that VOBU.
- in each embodiment, an MPEG-2 video stream is used as the video data, but other compression encoding formats, such as an MPEG-4 video stream or an MPEG-4 AVC video stream, may be used.
- according to the present invention, copy data obtained by copying at least the audio data not included in a data unit is recorded at a position that can be easily accessed when accessing that data unit (for example, at the beginning of the next VOBU, or immediately before or immediately after the VOBU), so a recording device enabling uninterrupted playback can be obtained.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/519,776 US7386553B2 (en) | 2003-03-06 | 2004-03-03 | Data processing device |
JP2005503083A JPWO2004080071A1 (ja) | 2003-03-06 | 2004-03-03 | データ処理装置 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003059931 | 2003-03-06 | ||
JP2003-059931 | 2003-03-06 | ||
JP2003118252 | 2003-04-23 | ||
JP2003-118252 | 2003-04-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004080071A1 true WO2004080071A1 (ja) | 2004-09-16 |
Family
ID=32964903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/002678 WO2004080071A1 (ja) | Data processing device | 2004-03-03 | 2004-03-03 |
Country Status (3)
Country | Link |
---|---|
US (1) | US7386553B2 (ja) |
JP (1) | JPWO2004080071A1 (ja) |
WO (1) | WO2004080071A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101642112B1 (ko) * | 2015-10-29 | 2016-07-22 | Nimbus Co., Ltd. | Modem bonding system and method for transmitting and receiving real-time multimedia over a mobile communication network |
JP2021531923A (ja) * | 2018-08-06 | 2021-11-25 | Amazon Technologies, Inc. | Systems and devices for controlling network applications |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060127170A (ko) * | 2004-03-03 | 2006-12-11 | Koninklijke Philips Electronics N.V. | Video stream processing circuit and method |
US7624021B2 (en) | 2004-07-02 | 2009-11-24 | Apple Inc. | Universal container for audio data |
JP4270161B2 (ja) * | 2005-04-15 | 2009-05-27 | Sony Corporation | Information recording/reproducing system, information recording/reproducing apparatus, and information recording/reproducing method |
US20090106807A1 (en) * | 2007-10-19 | 2009-04-23 | Hitachi, Ltd. | Video Distribution System for Switching Video Streams |
DE102008044635A1 (de) * | 2008-07-22 | 2010-02-04 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for providing a television sequence |
BR112013027226A2 (pt) | 2011-04-28 | 2016-12-27 | Panasonic Corp | Video processing device and video processing method |
JP6426901B2 (ja) * | 2014-03-14 | 2018-11-21 | Fujitsu Client Computing Limited | Distribution method, playback device, distribution device, transfer control program, and distribution control program |
CN110321300A (zh) * | 2019-05-20 | 2019-10-11 | The 715th Research Institute of China Shipbuilding Industry Corporation | Implementation method of a high-speed recording and playback module for signal-processing data |
US11570396B1 (en) * | 2021-11-24 | 2023-01-31 | Dish Network L.L.C. | Audio trick mode |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002354426A (ja) * | 2001-05-28 | 2002-12-06 | Canon Inc | Recording apparatus and method thereof |
JP2003045161A (ja) * | 2001-08-01 | 2003-02-14 | Plannet Associate Co Ltd | Editing method for digital audio/video information |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6172988B1 (en) * | 1996-01-31 | 2001-01-09 | Tiernan Communications, Inc. | Method for universal messaging and multiplexing of video, audio, and data streams |
CN1253017C (zh) | 1997-12-15 | 2006-04-19 | Matsushita Electric Industrial Co., Ltd. | Recording apparatus and method for recording video objects on an optical disc |
US7558472B2 (en) * | 2000-08-22 | 2009-07-07 | Tivo Inc. | Multimedia signal processing system |
GB9911989D0 (en) * | 1999-05-25 | 1999-07-21 | Pace Micro Tech Plc | Data transport streams processing |
US6272286B1 (en) | 1999-07-09 | 2001-08-07 | Matsushita Electric Industrial Co., Ltd. | Optical disc, a recorder, a player, a recording method, and a reproducing method that are all used for the optical disc |
GB9930788D0 (en) * | 1999-12-30 | 2000-02-16 | Koninkl Philips Electronics Nv | Method and apparatus for converting data streams |
US20020197058A1 (en) | 2001-05-28 | 2002-12-26 | Koichiro Suzuki | Recording apparatus |
- 2004
- 2004-03-03 WO PCT/JP2004/002678 patent/WO2004080071A1/ja active Application Filing
- 2004-03-03 US US10/519,776 patent/US7386553B2/en not_active Expired - Fee Related
- 2004-03-03 JP JP2005503083A patent/JPWO2004080071A1/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
JPWO2004080071A1 (ja) | 2006-06-08 |
US20060165387A1 (en) | 2006-07-27 |
US7386553B2 (en) | 2008-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0924704B1 (en) | Optical disc recording apparatus, and optical disc recording method for facilitating dubbing, storage medium for storing optical disc recording program for facilitating dubbing | |
CA2439467C (en) | A method and an apparatus for stream conversion, a method and an apparatus for data recording, and data recording medium | |
JP4299836B2 (ja) | Data processing device | |
US6782193B1 (en) | Optical disc recording apparatus, optical disc reproduction apparatus, and optical disc recording method that are all suitable for seamless reproduction | |
JP2004118986A (ja) | Information recording apparatus and method | |
JPWO2005015907A1 (ja) | Data processing device | |
WO2004080071A1 (ja) | Data processing device | |
JP2003274367A (ja) | Playback apparatus | |
JP2006121235A (ja) | Information medium, recording method, reproduction method, recording apparatus, and reproduction apparatus for digital stream signals | |
JPWO2002080542A1 (ja) | AV data recording/reproducing apparatus and method, and recording medium recorded by the apparatus or method | |
KR100625406B1 (ko) | Data processing device | |
JP4481929B2 (ja) | Method and apparatus for recording a data stream | |
WO2004030358A1 (ja) | Data processing device | |
KR100987767B1 (ko) | Information storage medium on which still images are recorded, and reproducing apparatus and method therefor | |
US20040076406A1 (en) | Information recording apparatus and method | |
JP2003174622A (ja) | Audio/video information recording/reproducing apparatus and method, and recording medium on which information has been recorded using the apparatus and method | |
EP1457990A1 (en) | Audio/video information recording/reproducing apparatus and method, and recording medium in which information is recorded by using the audio/video information recording/reproducing apparatus and method | |
JP2003132628A (ja) | Information recording/reproducing apparatus | |
JP2004355806A (ja) | Information recording/reproducing apparatus | |
JP2006121213A (ja) | Data conversion apparatus, data conversion method, data conversion program, and program recording medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005503083 Country of ref document: JP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 20048003767 Country of ref document: CN |
|
ENP | Entry into the national phase |
Ref document number: 2006165387 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10519776 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 10519776 Country of ref document: US |