
EP1394768B1 - Sound synthesiser - Google Patents

Sound synthesiser

Info

Publication number
EP1394768B1
EP1394768B1 (application EP02256081A)
Authority
EP
European Patent Office
Prior art keywords
synthesiser
active
sample points
voices
stored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP02256081A
Other languages
German (de)
French (fr)
Other versions
EP1394768A1 (en)
Inventor
Thomas Jacobsson
Andrej Petef
Alberto JIMENEZ FELTSTRÖM
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to ES02256081T (ES2305182T3)
Application filed by Telefonaktiebolaget LM Ericsson AB
Priority to EP02256081A (EP1394768B1)
Priority to DE60226435T (DE60226435D1)
Priority to AT02256081T (ATE394771T1)
Priority to AU2003255418A (AU2003255418A1)
Priority to JP2004532083A (JP4939753B2)
Priority to CNA03820777XA (CN1679081A)
Priority to KR1020057003466A (KR101011286B1)
Priority to PCT/EP2003/008902 (WO2004021331A1)
Priority to US10/526,522 (US20060005690A1)
Publication of EP1394768A1
Application granted
Publication of EP1394768B1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00 Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00 Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H 7/02 Instruments in which the tones are synthesised from a data store, e.g. computer organs in which amplitudes at successive sample points of a tone waveform are stored in one or more memories
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2230/00 General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H 2230/025 Computing or signal processing architecture features
    • G10H 2230/041 Processor load management, i.e. adaptation or optimization of computational load or data throughput in computationally intensive musical processes to avoid overload artifacts, e.g. by deliberately suppressing less audible or less relevant tones or decreasing their complexity
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H 2240/171 Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H 2240/201 Physical layer or hardware aspects of transmission to or from an electrophonic musical instrument, e.g. voltage levels, bit streams, code words or symbols over a physical link connecting network nodes or instruments
    • G10H 2240/241 Telephone transmission, i.e. using twisted pair telephone lines or any type of telephone network
    • G10H 2240/251 Mobile telephone transmission, i.e. transmitting, accessing or controlling music data wirelessly via a wireless or mobile telephone receiver, analogue or digital, e.g. DECT, GSM, UMTS
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 2250/00 Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H 2250/541 Details of musical waveform synthesis, i.e. audio waveshape processing from individual wavetable samples, independently of their origin or of the sound they represent
    • G10H 2250/621 Waveform interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)
  • Stereophonic System (AREA)

Abstract

A sound synthesiser is provided that reduces the computational requirements of a synthesiser with a high degree of polyphony, while ensuring that audible artefacts are kept to a minimum. The synthesiser comprises a plurality of samples stored in a memory; a plurality of voices each comprising means for calculating an output using a plurality of samples selected from the plurality of samples stored in the memory; wherein a voice is active when calculating an output; wherein the number of samples selected by the means for calculating depends upon the number of active voices.

Description

    TECHNICAL FIELD OF THE INVENTION
  • This invention relates to sound synthesisers, and more specifically to sound synthesisers used in devices where the computational resources are limited, such as in portable devices.
  • BACKGROUND OF THE INVENTION
  • Modern sound synthesisers are required to have a large number of voices. The number of voices a synthesiser has is defined as the number of sounds that can be generated simultaneously.
  • There are several different protocols and standards that define how electronic sound synthesisers reproduce a required set of sounds.
  • One popular way of generating sounds in electronic devices is by using the MIDI (Musical Instrument Digital Interface) protocol. Unlike digital audio files (such as those found on compact discs), a MIDI file does not contain details of specific sounds. Instead, the MIDI file contains a list of events that a device must perform in order to recreate the correct sound. Sampled sounds are stored in the synthesiser and are accessed according to the instructions contained in the MIDI file. Therefore, MIDI files can be much smaller than digital audio files and are suited to an environment where storage memory is limited.
  • In the General MIDI System Level 1 (GM-1), a synthesiser is required to have at least 24 voices.
  • Synthesisers, such as MIDI synthesisers, that generate sounds from pre-recorded sounds are known as wave-table based synthesisers. In such a synthesiser, one or several pre-recorded sequences of a musical instrument will be stored in a wave-table. Each sequence will contain a series of samples that are played in order to recreate the sound.
  • Often, a musical instrument can generate a high number of notes, and since sampling and recording every possible note would require a lot of memory, only a few notes are stored.
  • Therefore, when a synthesiser is required to produce a note or sound whose frequency differs from those stored in the memory, the synthesiser takes one of the stored sequences and uses a technique known as 'sample rate conversion' to re-sample it, changing its frequency to obtain the requested tone.
  • Changing the frequency of the stored sequence is achieved by accessing the stored samples at different rates. That is to say, for example, if the stored samples represent a musical note at a frequency of 300Hz, accessing every sample in turn will reproduce the musical note at 300Hz. If each stored sample is output twice before the next stored sample is read out, the note reproduced by the synthesiser will have a frequency of 150Hz. Similarly, if a note of 600Hz is required then every other stored sample is read out.
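  • As an editorial illustration only (this is not the circuitry described later in this patent), the access-rate idea can be sketched in a few lines of Python; the names used here (wavetable, phase_increment) are hypothetical:

        # Illustrative sketch: shift the pitch of a stored sequence by changing
        # the rate at which its samples are read out. A phase increment of 2.0
        # reads every other sample (doubling the frequency); 0.5 repeats each
        # sample twice (halving the frequency).
        def naive_resample(wavetable, phase_increment, num_output_samples):
            output = []
            phase = 0.0
            for _ in range(num_output_samples):
                index = int(phase) % len(wavetable)   # truncate: no interpolation yet
                output.append(wavetable[index])
                phase += phase_increment
            return output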
  • It is important to note that the rate at which samples are output by the synthesiser remains constant: one output sample is produced every sample period (the time between successive stored samples).
  • In the example above, by accessing every sample twice, artefacts (distortions) will be introduced into the output sound. To overcome these distortions, the synthesiser computes additional samples based on the stored samples. Therefore, in the 150Hz example above, instead of repeating each stored sample twice, the synthesiser will output one stored sample and calculate the next sample on the basis of the surrounding stored samples.
  • To do this, the synthesiser requires an interpolation technique.
  • The simplest interpolation technique uses a weighted average of the two surrounding samples. However, this technique is often inaccurate and still results in audible distortions.
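  • A minimal sketch of this two-point weighted average (linear interpolation), for illustration only and with hypothetical names:

        # Sketch of the simplest interpolation: a weighted average of the two
        # stored samples surrounding the required (fractional) position.
        def linear_interpolate(wavetable, phase):
            i = int(phase)                          # stored sample just before the required one
            frac = phase - i                        # fractional position between the two samples
            s0 = wavetable[i % len(wavetable)]
            s1 = wavetable[(i + 1) % len(wavetable)]
            return (1.0 - frac) * s0 + frac * s1    # weighted average of the two neighbours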
  • The optimum interpolation algorithm uses a sin(x)/x function and requires an infinite number of calculations. Of course, this is impractical and therefore sub-optimum algorithms have been developed.
  • One sub-optimum interpolation technique is described in Chapter 8 of "Applications of DSP to Audio and Acoustics" by Dana C. Massie, where several stored samples are used in the calculation (the number of samples used in the interpolation is known as the interpolation degree). The larger the number of samples used in the interpolation, the better the performance of the synthesiser.
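  • For illustration, a truncated sin(x)/x interpolation of degree ID might be sketched as follows; a practical implementation would read pre-computed, windowed coefficients from a table rather than evaluating the sinc function directly, and the names used here are hypothetical:

        import math

        # Sketch of truncated-sinc interpolation: "degree" stored samples around
        # the required fractional position are each weighted by a sinc coefficient
        # and summed. When the fractional part is zero, the coefficients reduce to
        # 1 and 0, so a stored sample is passed through unchanged.
        def sinc_interpolate(wavetable, phase, degree):
            i = int(phase)
            frac = phase - i
            before = degree // 2 + degree % 2          # e.g. degree 5 -> 3 before, 2 after
            total = 0.0
            for k in range(-(before - 1), degree - before + 1):
                x = k - frac                           # distance from the required point
                coeff = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
                total += coeff * wavetable[(i + k) % len(wavetable)]
            return total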
  • In a synthesiser, each voice is implemented using one or several digital signal processors (DSPs) and the computational power of the DSP system imposes a limit on the number of voices that a synthesiser can produce, and also limits the interpolation degree used for each voice.
  • When using a sub-optimum interpolation algorithm such as a truncated sin(x)/x algorithm, the computational complexity grows linearly with the interpolation degree.
  • In many commercial synthesisers, an interpolation degree of 10 is often used as this results in a good trade-off between computational complexity and sound quality.
  • EP 0750290 describes that the number of waveform samples per unit time, i.e., waveform sample forming resolution, is variably set depending on characteristics of a tone to be generated, such as the construction of harmonic components in the tone. The number is increased for a tone, or portion (e.g., attack portion) of a tone, containing a relatively great number of high-order harmonic components. Conversely, for a tone or portion (e.g., sustain portion) of a tone containing fewer high-order harmonic components, the number is decreased.
  • US 2001/045155 describes that to compress a MIDI format file, it is converted to a format 0 file. The format 0 file is then scanned to reduce the number of possible instruments to 16. MIDI commands having no sound output are then eliminated and NoteOn and NoteOff commands are replaced by a PlayNote command. This eliminates instances of DELTAtime equal to 0. Based on the characteristics of an appliance that has to play the compressed file, the notes are recentered in a bandwidth compatible with the appliance, after which the file is scanned to seek instances of polyphony. If an instance of polyphony exceeds the capacities of the appliance it is reduced in accordance with an instrument priority list and a note pitch.
  • SUMMARY OF THE INVENTION
  • It is desirable to be able to implement MIDI sound synthesisers in portable devices, such as mobile phones, to allow the devices to produce polyphonic ring tones and higher quality sounds.
  • However, the computational power available in a portable device (limited, for example, by cost and by the space available in the device) is not sufficient to allow the implementation of a sound synthesiser that conforms to the General MIDI System Level 1 (GM-1) (i.e. having 24 voices) and has an interpolation degree of around 10.
  • The present invention therefore seeks to provide a sound synthesiser that reduces the computational requirements of a synthesiser with a high degree of polyphony, while keeping audible artefacts to a minimum.
  • Therefore, according to a first aspect of the present invention there is provided a synthesiser that comprises a memory, containing a plurality of stored samples; means for calculating an output signal for each of a plurality of active voices, using a plurality of samples selected from the stored samples for each of the active voices; wherein the number of samples used for each active voice by the means for calculating depends upon the number of active voices.
  • Preferably, each voice is only able to compute one output at a time.
  • Preferably, the number of samples used for each active voice by the means for calculating decreases as the number of active voices increases.
  • Preferably, the number of samples used for each active voice by the means for calculating decreases as the number of active voices increases so that a maximum computational complexity is not exceeded.
  • Alternatively, the number of samples used for each active voice by the means for calculating decreases non-linearly as the number of active voices increases.
  • Preferably, the plurality of samples stored in the memory comprise samples of musical notes.
  • Preferably, the plurality of samples stored in the memory comprise samples of musical notes produced by different musical instruments.
  • According to a second aspect of the present invention, there is provided a portable device that comprises a music synthesiser as described above.
  • Preferably, the portable device is a mobile telephone.
  • Alternatively, the portable device is a pager.
  • It should be noted that the term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a better understanding of the present invention and to show how it may be carried into effect, reference will now be made by way of example to the accompanying drawings, in which:
    • Figure 1 shows a sound synthesiser in accordance with the invention.
    • Figure 2 shows a method performed by the controller of Figure 1 in accordance with the invention.
    • Figure 3 shows a scheme for determining the interpolation degree based on the number of active voices in accordance with the invention.
    • Figure 4 shows an alternative scheme for determining the interpolation degree based on the number of active voices in accordance with the invention.
    • Figure 5 shows a voice of the synthesiser of Figure 1 in more detail.
    • Figure 6 shows a mobile phone with a music synthesiser in accordance with the invention.
    DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Figure 1 shows a music synthesiser in accordance with the invention. As is conventional, the synthesiser comprises a controller 2, a plurality of voices 4, a wave-table memory 6, a filter table 8, a mixer 10 and a digital-to-analogue conversion module 12.
  • Although the synthesiser is hereinafter described as a wave-table based synthesiser that uses the MIDI protocol, it will be appreciated that the invention is applicable to any wave-table based synthesiser that is required to calculate a sample that lies between two stored samples.
  • It should be noted that the term 'sample' used herein, refers to a single audio sample point.
  • The total number N of voices 4 in the synthesiser defines the maximum polyphony of the system. As N increases, the polyphony increases allowing a greater number of sounds to be produced simultaneously. For a MIDI synthesiser conforming to the General MIDI System Level 1 (GM-1), the value of N will be at least 24. For clarity, only three voices are shown in Figure 1.
  • The controller 2 receives data through an input 14. The data will comprise a stream of MIDI information that relates to a piece of music or specific set of sounds. Each MIDI file will contain a list of events that describe the specific steps that the synthesiser must perform in order to generate the required sounds.
  • In the case of MIDI files stored within portable communication devices, a file may, for example, relate to a short piece of music that can be used as a ring-tone.
  • The controller 2 processes the MIDI data stream and directs the appropriate parts of the data to the relevant voices 4 so that the required sound can be synthesised. For example, the required sound may consist of several different instruments playing at once, and therefore each voice 4 will handle one monophonic instrument or one part of a polyphonic instrument at a time. Often, the MIDI file will contain instructions relating to the particular voices 4 that are to be used in synthesising the next output.
  • Depending upon the content of the MIDI file, a different number of voices may be in use at any one time, according to the particular piece of music being reproduced.
  • Each voice 4 is connected to the controller 2, the mixer 10, the wave-table memory 6 and the filter table 8.
  • The wave-table memory 6 contains a number of sequences of digital samples. Each sequence may, for example, represent a musical note for a particular musical instrument. Due to restrictions on memory, only a few notes per instrument may be stored.
  • Filter table 8 contains a number of values of a filter. In a preferred embodiment, the values represent a sinc function, where sinc(x) = sin(x)/x.
  • Although not shown in Figure 1, both the wave-table memory 6 and the filter table 8 have a multiplexer that allows each table to be accessed more than once per sample period (a sample period is defined as the inverse of the sampling rate, i.e. the rate at which the original sound was sampled). Therefore, each of voices 1 to N can share the same set of resources.
  • As is conventional, a voice 4, based upon the instructions received from the controller 2 and the interpolation degree of the system, produces the required output sample 16.
  • Often the sound to be produced by a particular voice 4 does not correspond in frequency to one of the stored sequences of samples. Therefore, the voice 4 must 'shift' the frequency of the stored sequence to produce a sound at the required frequency.
  • For example, if a stored sequence of samples represents a middle C note on a piano, then this sequence can be shifted in frequency to obtain a C# note or D note.
  • The frequency of the required sound can be expressed as a multiple of the frequency of the stored sequence. This multiple is written as a rational number M/L and is known as the phase increment.
  • Therefore, if the required frequency is twice the frequency of the stored sequence, then the phase increment will be equal to 2. If the required frequency is half the frequency of the stored sequence then the phase increment will be equal to 1/2. In the example where a C# note is required, the phase increment will be the twelfth root of 2 (an irrational number) which can be approximated by a rational number.
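  • For illustration, the rational approximation M/L can be computed with Python's standard fractions module; the denominator bound of 256 used below is an assumption, not a value taken from the patent:

        from fractions import Fraction

        # Sketch: approximate the phase increment for a note one semitone above
        # the stored note. The exact ratio 2**(1/12) is irrational, so it is
        # approximated by a rational number M/L with a bounded denominator.
        semitone_ratio = 2 ** (1.0 / 12.0)                        # ~1.059463
        phase_increment = Fraction(semitone_ratio).limit_denominator(256)
        M, L = phase_increment.numerator, phase_increment.denominator
        print(M, L, M / L)                                        # 196 185 ~1.059459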
  • Often, when the frequency of a stored sequence of samples is shifted, the required samples are not stored in the memory. That is, the required sample falls between two stored samples.
  • Therefore, the voice 4 retrieves a number of samples surrounding the required sample from the wave-table memory 6 and an equal number of filter coefficients from the filter table 8. Each sample retrieved from the wave-table memory 6 is then multiplied with an appropriate filter coefficient from the filter table 8 and the products combined to produce the output of the voice 16.
  • The coefficients of the filter table 8 are chosen so that, if the wave-table memory 6 does contain the required sample, then the other samples retrieved from the wave-table memory 6 are multiplied by a zero filter coefficient and the stored sample is output.
  • In a preferred embodiment where the filter table 8 contains values that are representative of a sinc function, the period of the sinc function is twice the sample period.
  • Each output 16 of a voice 4 is sent to a mixer 10 where the outputs 16 of all active voices 4 are combined into a combined output 18 and passed to the DAC module 12.
  • The DAC module 12 contains one or more digital-to-analogue converters that convert the combined output 18 of the mixer 10 to an analogue signal 20.
  • Figure 2 shows a method performed by the controller 2 of Figure 1 in accordance with the invention.
  • In step 101, the controller 2 analyses the MIDI data stream and determines the number of voices 4 that will be active during the next sample period. That is, the controller 2 determines how many different voices 4 will be contributing outputs 16 to the mixer 10.
  • In step 103, the controller 2 determines the number of samples to be used by each voice 4 in calculating the next output 16 (known as the interpolation degree ID) and instructs the voices 4 appropriately.
  • In step 105, each active voice 4 calculates an output 16 on the basis of instructions received from the controller 2 using a number of stored samples in the calculation equal to the interpolation degree ID. Each active voice 4 will also use a number of filter coefficients from the filter coefficient table 8 equal to the interpolation degree ID.
  • The process repeats for each output cycle, i.e. the process is repeated once every sample period.
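  • A compact sketch of this per-sample-period loop is given below; the voice interface (is_active, calculate_output) and the table contents are assumptions made for illustration, not details taken from the patent:

        # Sketch of the controller's loop (steps 101, 103 and 105 of Figure 2).
        # interpolation_degree_table maps "number of active voices" to the
        # interpolation degree ID; its contents are placeholders.
        def render_one_sample_period(voices, midi_events, interpolation_degree_table):
            # Step 101: determine which voices will contribute an output this period.
            active = [v for v in voices if v.is_active(midi_events)]
            # Step 103: look up the interpolation degree from the number of active voices.
            degree = interpolation_degree_table[len(active)]
            # Step 105: each active voice computes its output using "degree" stored
            # samples; the mixer is modelled here as a simple sum.
            return sum(v.calculate_output(degree) for v in active)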
  • In the embodiments of the invention described with reference to Figures 3 and 4, the synthesiser has 24 voices 4 and has a maximum interpolation degree of 11.
  • Figure 3 is a table that shows a scheme for determining the interpolation degree based on the number of active voices in accordance with the invention. Specifically, for any given number of active voices, the table gives the interpolation degree to be used.
  • For example, if the controller 2 determines that only one voice 4 will be active during the next sample period, the controller 2 instructs the voice 4 to use an interpolation degree of 11.
  • As the number of active voices 4 increases, the interpolation degree used in the calculation of the outputs 16 decreases in a linear fashion.
  • If all 24 voices 4 of the synthesiser are active then the controller 2 determines that an interpolation degree of 4 will be used.
  • Alternatively, if a maximum computational complexity is defined for the synthesiser, such as for a synthesiser used in a portable device, the interpolation degree may be chosen such that the maximum computational complexity is not exceeded.
  • Figure 4 is another table that shows such a scheme. Again, the interpolation degree decreases as the number of active voices 4 increases. However, the change is not linear. Instead, the interpolation degree is calculated so that the maximum computational complexity is not exceeded.
  • For example, if a synthesiser has 24 voices, a maximum interpolation degree of 11 and consumes 0.5 MIPS (millions of instructions per second) per interpolation degree per voice, then a conventional synthesiser may require up to 24 × 11 × 0.5 = 132 MIPS. This computational power far exceeds that available in a typical current portable device such as a mobile terminal.
  • Using the scheme shown in Figure 4, the computational power will not exceed 50 MIPS. This value is more appropriate for a portable device.
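  • The two schemes can be illustrated with a short sketch. Only the endpoints stated in the text (degree 11 for one voice, degree 4 for 24 voices), the cost of 0.5 MIPS per degree per voice and the 50 MIPS budget are taken from the description; the exact intermediate values of Figures 3 and 4 are not reproduced and are assumed here:

        # Sketch of two ways to derive the interpolation degree from the number of
        # active voices (Figure 3: roughly linear; Figure 4: complexity-capped).
        MAX_DEGREE = 11
        MAX_VOICES = 24
        MIPS_PER_DEGREE_PER_VOICE = 0.5
        MIPS_BUDGET = 50.0

        def degree_linear(active_voices):
            """Figure 3 style: decrease roughly linearly from 11 down to 4."""
            slope = (MAX_DEGREE - 4) / (MAX_VOICES - 1)
            return round(MAX_DEGREE - slope * (active_voices - 1))

        def degree_budget_capped(active_voices):
            """Figure 4 style: largest degree whose total cost stays within the budget."""
            affordable = int(MIPS_BUDGET / (MIPS_PER_DEGREE_PER_VOICE * active_voices))
            return max(1, min(MAX_DEGREE, affordable))

        # Worked check of the figures quoted in the text: 24 voices at degree 11
        # cost 24 * 11 * 0.5 = 132 MIPS, whereas degree_budget_capped(24) == 4
        # gives 24 * 4 * 0.5 = 48 MIPS, within the 50 MIPS budget.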
  • The actual scheme used will be determined by the computational power available to the synthesiser and the amount of computational power required to implement each degree of interpolation.
  • Figure 5 shows a voice of Figure 1 in more detail. The voice 4 is shown with the controller 2, wave-table memory 6 and filter table 8.
  • A processor 22 receives the instructions relevant to the voice 4 from the controller 2. The instructions will comprise the MIDI information relevant to the voice 4 and an indication relating to the interpolation degree to be used in calculating the next output 16.
  • The controller 2 may indicate to each voice 4 the actual interpolation degree that is to be used in calculating the next output, or alternatively, the controller 2 may indicate the number of active voices to each voice 4 and let the processor 22 determine the appropriate interpolation degree.
  • The processor 22 is connected to a phase increment register 24, a counter 26 and a filter coefficient selector 28.
  • The filter coefficient selector 28 is connected to the filter table 8 for retrieving appropriate filter coefficients.
  • The filter coefficient selector 28 is also connected to the counter 26.
  • In accordance with the invention, the processor 22 informs the counter 26 and the filter coefficient selector 28 of the interpolation degree that is to be used for calculating the next output 16.
  • The processor 22 sets the value of the phase increment register 24 for producing the required output 16. The value of the phase increment register 24 will be M/L, where L and M are integers, and is determined by the processor 22 on the basis of the instructions received from the controller 2.
  • The phase increment value is passed to an adder 30. The adder 30 is connected to a phase register 32 that records the current phase. The output of the adder 30 comprises an integer part and a fractional part.
  • Both the integer part and fractional part of the output of the phase register are fed back to the adder 30.
  • The integer part of the output of phase register 32 is also passed to a second adder 34 where it is added to the output of the counter 26. The integer output of the adder 34 is connected to the wave-table memory 6 and determines a sample that is to be read out.
  • The samples that are retrieved from the wave-table memory are passed to a multiply-accumulate circuit 36.
  • In addition to being fed into the adder 30, the fractional part of the phase register 32 output is fed to the filter coefficient selector 28.
  • The output of the filter coefficient selector 28 is passed to the multiply-accumulate circuit 36 where it is combined with the samples retrieved from the wave-table memory 6.
  • The operation of the voice 4 is now briefly described.
  • When the value loaded into the phase register 32 is a non-integer, i.e. its fractional part is non-zero, the required sample lies between two tabulated samples. Therefore the required sample must be calculated.
  • The adder 30 operates once per sample period to add the phase increment from the phase increment register 24 to the current phase (provided by the phase register 32).
  • The integer part of the phase register 32 output indicates the wave-table memory address that contains the stored sample immediately before the required sample. To calculate the required sample, a number of samples equal to ID are read out from the wave-table memory 6.
  • The counter 26 increments by one each time to select ID samples from around the required sample. Therefore, when ID is 8, four samples before the required sample are read out along with four samples after the required sample. If ID is 5, three samples before the required sample are read out along with two samples after the required sample. Alternatively, two samples before the required sample are read out and three samples after the required sample. These samples are passed to the multiply-accumulate circuit 36.
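  • A sketch of this address generation, for illustration only: the counter output is added (in adder 34) to the integer part of the phase to form successive wave-table addresses. The particular split between 'before' and 'after' samples chosen here for odd degrees is an assumption:

        # Sketch: generate the ID wave-table addresses around the required sample.
        # For degree 8 this yields 4 samples before and 4 after the required point;
        # for degree 5 it yields 3 before and 2 after.
        def wavetable_addresses(integer_phase, degree, table_length):
            before = degree // 2 + degree % 2
            counter_values = range(-(before - 1), degree - before + 1)
            return [(integer_phase + c) % table_length for c in counter_values]

        print(wavetable_addresses(100, 8, 4096))   # [97, 98, 99, 100, 101, 102, 103, 104]
        print(wavetable_addresses(100, 5, 4096))   # [98, 99, 100, 101, 102]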
  • It should be noted that the counter operates from its initial value to its final value once each sample period.
  • The filter coefficient selector 28 obtains appropriate filter coefficients from the filter table 8 depending upon the fractional part of the phase register output and the interpolation degree. The filter coefficient selector 28 is controlled by the counter 26 to obtain ID coefficients from the filter table 8.
  • Once the filter coefficients 44 have been obtained from the filter table 8, the input received from the counter 26 is used to pass the filter coefficients to the multiply-accumulate circuit 36. Here, the samples obtained from the wave-table memory 6 are multiplied with the appropriate filter coefficients 44, and the products added to obtain the output for the voice 16.
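  • The multiply-accumulate step itself can be sketched as follows; the layout assumed for the filter table (rows indexed by a quantised fractional phase, with ID coefficients per row) is an illustration, not the patent's actual arrangement:

        # Sketch of the multiply-accumulate operation: each of the ID samples read
        # from the wave-table is multiplied by its filter coefficient and the
        # products are summed to form the voice output.
        def voice_output(wavetable, filter_table, integer_phase, frac_index, degree):
            before = degree // 2 + degree % 2
            offsets = range(-(before - 1), degree - before + 1)
            samples = [wavetable[(integer_phase + k) % len(wavetable)] for k in offsets]
            coefficients = filter_table[frac_index][:degree]   # ID coefficients for this fraction
            return sum(s * c for s, c in zip(samples, coefficients))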
  • As the fractional part of the phase register 32 changes, the filter coefficients obtained from the filter table 8 will change.
  • As the number of active voices 4 changes, the processor will instruct the counter 26 and filter coefficient selector 28 of the required interpolation degree as appropriate.
  • Figure 6 shows a mobile phone with a music synthesiser in accordance with the invention. Although the invention is described as being incorporated in a mobile phone, it will be appreciated that the invention is applicable to any portable device, such as a personal digital assistant (PDA), a pager, an electronic organiser, or any other equipment in which it is desirable to be able to reproduce high-quality polyphonic sound.
  • As is conventional, the mobile phone 46 comprises an antenna 48, transceiver circuitry 50, a CPU 52, a memory 54 and a speaker 56.
  • The mobile phone 46 also comprises a MIDI synthesiser 58 in accordance with the invention. The CPU 52 provides the MIDI synthesiser 58 with MIDI files. The MIDI files may be stored in a memory 54, or may be downloaded from a network via the antenna 48 and transceiver circuitry 50.
  • There is thus described a sound synthesiser that reduces the computational requirements of a synthesiser with a high degree of polyphony, while ensuring that audible artefacts are kept to a minimum.

Claims (16)

  1. A synthesiser comprising:
    a memory (6), containing a plurality of stored single audio sample points;
    means (4) for calculating an output signal for each of a plurality of active voices, using a plurality of single audio sample points selected from the stored sample points for each of the active voices;
    wherein the number of sample points used for each active voice by the means (4) for calculating is derived from the number of active voices by means of a look-up table.
  2. A synthesiser as claimed in claim 1, wherein the number of sample points used for each active voice by the means (4) for calculating decreases as the number of active voices increases.
  3. A synthesiser as claimed in claim 2, wherein the number of sample points used for each active voice by the means (4) for calculating decreases as the number of active voices increases so that a maximum computational complexity is not exceeded.
  4. A synthesiser as claimed in claim 1, wherein the number of sample points used for each active voice by the means (4) for calculating decreases non-linearly as the number of active voices increases.
  5. A synthesiser as claimed in one of claims 1 to 4 wherein the plurality of sample points stored in the memory (6) comprise sample points of musical notes.
  6. A synthesiser as claimed in claim 5 wherein the plurality of sample points stored in the memory (6) comprise sample points of musical notes produced by different musical instruments.
  7. A synthesiser as claimed in any preceding claim wherein the means (4) for calculating an output signal comprises a filter table (8).
  8. A synthesiser as claimed in claim 7 wherein the filter table (8) contains coefficients of a sinc function.
  9. A synthesiser as claimed in any preceding claim, wherein the synthesiser is a MIDI music synthesiser.
  10. A portable device, comprising a synthesiser as claimed in any preceding claim.
  11. A portable device as claimed in claim 10 wherein the portable device is a mobile phone.
  12. A portable device as claimed in claim 10 wherein the portable device is a pager.
  13. A method of operating a synthesiser having a plurality of single audio sample points stored in a memory, the method comprising the steps of:
    determining the number of voices that will be active in producing a sound (101);
    determining an interpolation degree from a look-up table, the interpolation degree being selected on the basis of the number of voices that will be active, wherein the interpolation degree is defined as the number of sample points to be selected from the plurality of sample points stored in the memory (103); and
    calculating an output for each active voice, using the number of said stored sample points determined by the interpolation degree (105).
  14. A method as claimed in claim 13, wherein the interpolation degree decreases as the number of active voices increases.
  15. A method as claimed in claim 13, wherein the interpolation degree decreases as the number of active voices increases so that a maximum computational complexity is not exceeded.
  16. A method as claimed in claim 13, wherein the interpolation degree decreases non-linearly as the number of active voices increases.
EP02256081A 2002-09-02 2002-09-02 Sound synthesiser Expired - Lifetime EP1394768B1 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
EP02256081A EP1394768B1 (en) 2002-09-02 2002-09-02 Sound synthesiser
DE60226435T DE60226435D1 (en) 2002-09-02 2002-09-02 Klangsynthesierer
AT02256081T ATE394771T1 (en) 2002-09-02 2002-09-02 SOUND SYNTHESIZER
ES02256081T ES2305182T3 (en) 2002-09-02 2002-09-02 SOUND SYNTHETIZER.
JP2004532083A JP4939753B2 (en) 2002-09-02 2003-08-11 Sound synthesizer
CNA03820777XA CN1679081A (en) 2002-09-02 2003-08-11 Sound synthesizer
AU2003255418A AU2003255418A1 (en) 2002-09-02 2003-08-11 Sound synthesiser
KR1020057003466A KR101011286B1 (en) 2002-09-02 2003-08-11 Sound synthesiser
PCT/EP2003/008902 WO2004021331A1 (en) 2002-09-02 2003-08-11 Sound synthesiser
US10/526,522 US20060005690A1 (en) 2002-09-02 2003-08-11 Sound synthesiser

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP02256081A EP1394768B1 (en) 2002-09-02 2002-09-02 Sound synthesiser

Publications (2)

Publication Number Publication Date
EP1394768A1 EP1394768A1 (en) 2004-03-03
EP1394768B1 true EP1394768B1 (en) 2008-05-07

Family

ID=31197967

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02256081A Expired - Lifetime EP1394768B1 (en) 2002-09-02 2002-09-02 Sound synthesiser

Country Status (5)

Country Link
EP (1) EP1394768B1 (en)
JP (1) JP4939753B2 (en)
AT (1) ATE394771T1 (en)
DE (1) DE60226435D1 (en)
ES (1) ES2305182T3 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7613287B1 (en) 2005-11-15 2009-11-03 TellMe Networks Method and apparatus for providing ringback tones

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2040537B (en) * 1978-12-11 1983-06-15 Microskill Ltd Digital electronic musical instrument
JP3435702B2 (en) * 1992-05-19 2003-08-11 松下電器産業株式会社 Music generator
JP3000894B2 (en) * 1995-06-19 2000-01-17 ヤマハ株式会社 Musical tone generation method
SG42418A1 (en) * 1995-06-19 1997-08-15 Yamaha Corp Method and device for forming a tone waveform by combined use of different waveform sample forming resolutions
JP3267106B2 (en) * 1995-07-05 2002-03-18 ヤマハ株式会社 Musical tone waveform generation method
JP3152156B2 (en) * 1996-09-20 2001-04-03 ヤマハ株式会社 Music sound generation system, music sound generation apparatus and music sound generation method
FR2808370A1 (en) * 2000-04-28 2001-11-02 Cit Alcatel METHOD OF COMPRESSING A MIDI FILE

Also Published As

Publication number Publication date
JP4939753B2 (en) 2012-05-30
JP2005537506A (en) 2005-12-08
ATE394771T1 (en) 2008-05-15
EP1394768A1 (en) 2004-03-03
ES2305182T3 (en) 2008-11-01
DE60226435D1 (en) 2008-06-19

Similar Documents

Publication Publication Date Title
EP0925576B1 (en) Wavetable synthesizer and operating method using a variable sampling rate approximation
US5917917A (en) Reduced-memory reverberation simulator in a sound synthesizer
JP2566697B2 (en) Digital filter for digital music synthesizer
US6096960A (en) Period forcing filter for preprocessing sound samples for usage in a wavetable synthesizer
US20030033338A1 (en) Method for removing aliasing in wave table based synthesisers
KR101011286B1 (en) Sound synthesiser
US5824936A (en) Apparatus and method for approximating an exponential decay in a sound synthesizer
EP1394768B1 (en) Sound synthesiser
EP1493144B1 (en) Generating percussive sounds in embedded devices
JP2779983B2 (en) Electronic musical instrument
US5787023A (en) Digital filter device for electronic musical instruments
US7151215B2 (en) Waveform adjusting system for music file
JP3646854B2 (en) Music synthesizer
JPH10124060A (en) Method and device for musical sound generation and recording medium where program for sound generation is recorded
JP3399340B2 (en) Music synthesis device and recording medium storing music synthesis program
JP2678970B2 (en) Tone generator
JPH02108099A (en) Waveform interpolating device
JP3405170B2 (en) Music synthesizer
JP2007034099A (en) Musical sound synthesizer
JP2566697C (en)
JPH07225582A (en) Device for generating musical sound for electronic instrument
JPH10198381A (en) Music sound creating device
JPH0695677A (en) Musical sound synthesizing device
JP2004309980A (en) Musical sound signal processor
JPH04120596A (en) Musical tone signal generating device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)

17P Request for examination filed

Effective date: 20040825

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

17Q First examination report despatched

Effective date: 20061109

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REF Corresponds to:

Ref document number: 60226435

Country of ref document: DE

Date of ref document: 20080619

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2305182

Country of ref document: ES

Kind code of ref document: T3

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081007

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080807

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20090210

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080807

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080902

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080902

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080507

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080808

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 60226435

Country of ref document: DE

Owner name: OPTIS WIRELESS TECHNOLOGY, LLC, PLANO, US

Free format text: FORMER OWNER: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL), STOCKHOLM, SE

Effective date: 20140618

Ref country code: DE

Ref legal event code: R082

Ref document number: 60226435

Country of ref document: DE

Representative's name: HOFFMANN - EITLE, DE

Effective date: 20140618

Ref country code: DE

Ref legal event code: R082

Ref document number: 60226435

Country of ref document: DE

Representative's name: GRUENECKER, KINKELDEY, STOCKMAIR & SCHWANHAEUS, DE

Effective date: 20140618

Ref country code: DE

Ref legal event code: R082

Ref document number: 60226435

Country of ref document: DE

Representative's name: GRUENECKER PATENT- UND RECHTSANWAELTE PARTG MB, DE

Effective date: 20140618

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 60226435

Country of ref document: DE

Representative's name: GRUENECKER, KINKELDEY, STOCKMAIR & SCHWANHAEUS, DE

Ref country code: DE

Ref legal event code: R082

Ref document number: 60226435

Country of ref document: DE

Representative's name: GRUENECKER PATENT- UND RECHTSANWAELTE PARTG MB, DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20150528 AND 20150603

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: TP

Owner name: OPTIS WIRELESS TECHNOLOGY, LLC, US

Effective date: 20151223

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20160825

Year of fee payment: 15

Ref country code: DE

Payment date: 20160823

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20160822

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20160824

Year of fee payment: 15

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60226435

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20170902

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20180531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180404

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170902

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171002

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20181024

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170903