
EP3766099A1 - Event-based vision sensor manufactured with 3d-ic technology - Google Patents

Event-based vision sensor manufactured with 3d-ic technology

Info

Publication number
EP3766099A1
Authority
EP
European Patent Office
Prior art keywords
pixel
die
wafer
sensor
circuit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19716563.2A
Other languages
German (de)
French (fr)
Inventor
Raphael BERNER
Christian BRÄNDLI
Massimo ZANNONI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Advanced Visual Sensing AG
Original Assignee
Sony Advanced Visual Sensing AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Advanced Visual Sensing AG filed Critical Sony Advanced Visual Sensing AG
Publication of EP3766099A1 publication Critical patent/EP3766099A1/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14601Structural or functional details thereof
    • H01L27/14609Pixel-elements with integrated switching, control, storage or amplification elements
    • H01L27/14612Pixel-elements with integrated switching, control, storage or amplification elements involving a transistor
    • H01L27/14614Pixel-elements with integrated switching, control, storage or amplification elements involving a transistor having a special gate structure
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14601Structural or functional details thereof
    • H01L27/14634Assemblies, i.e. Hybrid structures
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14601Structural or functional details thereof
    • H01L27/14636Interconnect structures
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14683Processes or apparatus peculiar to the manufacture or treatment of these devices or parts thereof
    • H01L27/1469Assemblies, i.e. hybrid integration
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L29/00Semiconductor devices specially adapted for rectifying, amplifying, oscillating or switching and having potential barriers; Capacitors or resistors having potential barriers, e.g. a PN-junction depletion layer or carrier concentration layer; Details of semiconductor bodies or of electrodes thereof ; Multistep manufacturing processes therefor
    • H01L29/40Electrodes ; Multistep manufacturing processes therefor
    • H01L29/41Electrodes ; Multistep manufacturing processes therefor characterised by their shape, relative sizes or dispositions
    • H01L29/423Electrodes ; Multistep manufacturing processes therefor characterised by their shape, relative sizes or dispositions not carrying the current to be rectified, amplified or switched
    • H01L29/42312Gate electrodes for field effect devices
    • H01L29/42316Gate electrodes for field effect devices for field-effect transistors
    • H01L29/4232Gate electrodes for field effect devices for field-effect transistors with insulated gate
    • H01L29/42364Gate electrodes for field effect devices for field-effect transistors with insulated gate characterised by the insulating layer, e.g. thickness or uniformity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/47Image sensors with pixel address output; Event-driven image sensors; Selection of pixels to be read out based on image data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/77Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/79Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/86Event-based monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/51Control of the gain

Definitions

  • DVS Dynamic Vision Sensor
  • QE quantum efficiency
  • FF fill-factor
  • the presented invention has the main purpose of mitigating these two issues by using, in the fabrication of an event-based vision sensor, an advanced stacking technique known as Three-Dimensional Integrated Circuit, which stacks multiple wafers (or dies) and interconnects them vertically.
  • an advanced stacking technique known as Three-Dimensional Integrated Circuit
  • the invention features an Event-Based Vision Sensor (EBVS) including stacked dies that are connected vertically.
  • EBVS Event-Based Vision Sensor
  • there is at least one connection for every pixel of the pixel array between the dies.
  • photodiodes of each pixel of the pixel array are in a first die and respective event detectors of each pixel of the pixel array are in a second die, and
  • interconnections between the first and the second die connect the photodiodes to respective event detectors.
  • the rest of the pixel circuitry of each pixel of the pixel array can be implemented in different ways.
  • it can be located on the second die, or in the first die, or distributed between the first die and the second die.
  • n-FET transistors are used in the first wafer or die, and both n-FET and p-FET transistors are used in the second die.
  • the transistor properties can differ between the transistors on the first die and those on the second die, including different gate oxide thicknesses or different implants.
  • the invention features a method for fabricating an Event-Based Vision Sensor.
  • this method comprises fabricating different devices of each pixel of the pixel array in different wafers or dies and then stacking the wafers or dies.
  • a “die” is a piece or a portion of a semiconductor wafer, typically in a rectangular shape, such as a chip.
  • this piece of semiconductor wafer includes a portion of an instance of an integrated circuit device, such as the Event-Based Vision Sensor.
  • the reference to wafer or die is based on the potential for different fabrication approaches.
  • the stacking can be performed at the wafer-level before dicing into dies. Or, the stacking can be performed on individual dies, after they have been cut away or diced from the wafer. As a result, the final device resulting from the fabrication process will be a stack of dies.
  • the method would then include connecting each of the pixels using Cu-Cu connections, for example.
  • the method further involves fabricating photodiodes of each pixel of the pixel array in a first wafer or die and fabricating respective event detectors of each pixel of the pixel array in a second wafer or die.
  • FIG. 1 Circuit diagram showing a state of the art (SOA) pixel implementation for an event-based image sensor, e.g. according to PCT/IB2017/058526 and U.S. Pub. No. 2018/0191972;
  • SOA state of the art
  • FIGs. 2A-2C A SOA event-based image sensor: single wafer (partial vertical cross-section in Fig. 2A and Fig. 2B, partial top view in Fig. 2C) that implements the sensor;
  • Fig. 2A refers to a front-side illumination (FSI) application;
  • Fig. 2B refers to a back-side illumination (BSI) application;
  • FSI front-side illumination
  • BSI back-side illumination
  • FIGs. 3A-3D A partial vertical cross-section of two (2) stacked wafers that shows a preferred embodiment before dicing the wafers (Fig. 3A); partial vertical cross-section focused on one pixel, showing a back-side illuminated (BSI) top wafer with only the photo-diode (PD) and a single connection per pixel to the bottom wafer (Fig. 3B); a block diagram (Fig. 3C) and a circuit diagram (Fig. 3D) showing the details of the pixel frontend circuit and how it is arranged between the wafers/dies;
  • BSI back-side illuminated
  • PD photo-diode
  • Figs. 5A-5D A partial vertical cross-section of two (2) stacked wafers that shows the entire frontend in the top wafer, optionally including a source-follower stage (Fig. 5A); partial vertical cross-section focused on one pixel showing how the output from the first wafer can then directly connect to one of the plates of the MIM (metal-insulator-metal) capacitor of the event detector (which is located between the two topmost metals) on the bottom wafer, shown in a simplified vertical section that includes details on the silicon process layers for the two stacked wafers (Fig. 5B); two circuit diagrams showing the details of the pixel frontend circuit (Figs. 5C and 5D);
  • FIGs. 6A-6B A circuit diagram showing an alternative pixel frontend circuit, with an additional p-FET that improves performance (Fig. 6A); a circuit diagram showing the separation between the wafers/dies (Fig. 6B);
  • Figs. 7A-7C Three circuit diagrams showing variations of the pixel frontend circuit, in which multiple Cu-Cu connections are needed in every pixel; Fig. 7A is preferred; Figs. 7B and 7C show how the elements of the circuit could be arranged between the wafers/dies such that the top wafer does not contain p-FET devices, in which case more than two cross-wafer (or -die) connections per pixel are required.
  • the term “and/or” includes any and all combinations of one or more of the associated listed items. Further, the singular forms and the articles “a”, “an” and “the” are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms: includes, comprises, including and/or comprising, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • An Event-Based Pixel Array is an array of pixels containing photo sensitive devices; these pixels, spatially and/or temporally independent from each other, generate discretized data as function of the light radiation that they receive.
  • An Event-based Vision Sensor is a sensor that outputs data extracted and/or elaborated from an EBPA.
  • the Fill Factor is defined as the ratio between the area of the photo-sensitive device in a pixel and the total area of that pixel. It is a measure of how much of the total light radiation that hits the surface of a sensor can be effectively captured by the sensor.
  • Quantum Efficiency is defined as the ratio between the number of electrons that are generated and transformed into an electrical signal and the number of photons that hit the surface of a photo-sensitive sensor.
  • 3D IC acronym for Three-Dimensional Integrated Circuit; it is a technique to manufacture an integrated circuit by stacking silicon wafers or dies and interconnecting them vertically.
  • Front-Side Illumination (FSI) type of image sensor that is realized as an integrated circuit (IC), such that it is illuminated from the top of the die, which is the side onto which the layers of the planar process are realized; all the devices and the metal routing, together with the photo-sensitive devices, receive direct light radiation.
  • BSI Back-Side Illumination
  • Event-Based Vision Sensor can be found, for examples, in PCT/IB2017/058526 or US7728269B2 or U.S. Pub. No. 2018/0191972.
  • FIG. 1 An example of pixel architecture of an EBPA of an EBVS, which will be used as reference in this document, is shown in Fig. 1. It is taken from PCT/IB2017/058526 and U.S. Pub. No. 2018/0191972, which are incorporated herein by reference in their entirety.
  • the core concepts in the proposed invention can be applied to virtually any Event-Based Vision Sensor realized as IC, not depending on any specific pixel architecture used.
  • the pixel circuit 100 contains a photodiode PD, or other photosensor, to measure impinging light and convert the light intensity to a current Iphoto; a photoreceptor circuit PRC to generate a photoreceptor signal Vpr dependent on the light intensity; and a memory capacitor C1 to remember the past photoreceptor signal.
  • the photosensor PD and photoreceptor circuit PRC constitute the photoreceptor module PR.
  • Memory capacitor C1: receives the photoreceptor signal Vpr such that the first plate of the capacitor carries a charge that is responsive to the photoreceptor signal Vpr, and thus to the light received by the photosensor PD; it is part of the event detector ED. A second plate of the memory capacitor C1 is connected to the comparator node (inverting input) of A1. Thus the voltage of the comparator node, Vdiff, varies with changes in the photoreceptor signal Vpr.
  • Comparator A1: a means to compare the difference between the current photoreceptor signal Vpr and the past photoreceptor signal to a threshold; it is part of the event detector ED.
  • This comparator A1 can be in each pixel, or shared between a subset (for example a column) of pixels. In the preferred embodiment the comparator will be integral to the pixel, with each pixel having a dedicated comparator A1.
  • Memory 50 stores the comparator output based on a sample signal from the controller 60 and is part of the event detector ED.
  • Memory can be a sampling circuit (for example a switch and a parasitic or explicit capacitor) or a digital memory circuit (a latch or a flip-flop).
  • the memory will be a sampling circuit and each pixel will have two memories.
  • Conditional reset circuit R1: the condition for reset is a combination of the state of the memorized comparator output and a reset signal applied by the controller 60; it is part of the event detector ED.
  • Peripheral circuit components: the comparator A1 and the memory 50 can be located in the pixel or in peripheral circuits (outside the pixel circuit).
  • the peripheral circuits contain a controller 60 which applies threshold signals to the comparator A1, sends control signals to the memory 50 and selects the times when the conditional reset circuit R1 becomes active.
  • the peripheral circuits may also contain a readout circuit, which reads the content of the memory 50, determines if the light intensity for a given pixel has increased, decreased, or remained unchanged, and sends the output (computed from the current memory value) to a processor.
  • the comparator indicates whether light has increased and/or decreased. For an OFF event: if Vdiff is lower than the threshold Voff (applied on Vb), the comparator output is high, and this level is stored in the memory; this means a decrease was detected. If Vdiff is not lower than the threshold, the comparator output is low: no decrease detected.
  • the pixel circuit 100 and controller 60 operate as follows.
  • a change in light intensity received by the photosensor PD will translate to a change in photoreceptor signal Vpr.
  • When the reset circuit R1 is not conducting, changes in Vpr will also be reflected in the voltage Vdiff at the comparator node at the inverting input (-) of the comparator A1. This occurs because the voltage across the memory capacitor C1 stays constant.
  • the comparator A1 compares the voltage at the comparator node at the second terminal of the memory capacitor C1 (Vdiff) to a threshold voltage Vb (from the controller) applied to the non-inverting input (+) of the comparator A1.
  • the controller 60 operates the memory 50 to store the comparator output Vcomp.
  • the memory 50 is typically implemented as part of the pixel circuit 100 as shown. In other embodiments, however, the memory 50 is implemented as part of a column logic circuit (peripheral circuit, one per column of the pixel array).
  • Conditional reset circuit R1: if the state of the stored comparator output held in the memory 50 indicates a change in light intensity AND the global reset signal GlobalReset from the controller 60 is active, the conditional reset circuit R1 is conducting. Here “AND” indicates the logical AND operator. With the conditional reset circuit R1 in a conductive state, the voltage at the comparator node at the inverting input of the comparator A1 (Vdiff) is reset to a known level. This stores the current photoreceptor signal Vpr on the memory capacitor C1.
  • these EBVSs having EBPA of pixels as shown in Fig. 1 have been manufactured as integrated circuits using a silicon planar process on a single wafer.
  • the semiconductor devices e.g. MOS transistors, diodes and photo-diodes, polysilicon resistors, etc.
  • the area of a pixel 100 must be shared between the photo-sensitive devices, PD for example, and the rest of the circuit, as can be seen in Fig. 2A, showing the front-side illumination architecture, and Fig. 2B, showing the back-side illumination architecture, and especially in Fig. 2C, which shows the plan view.
  • the photo detectors PD cannot use all the light that hits the surface. Even if this issue can be mitigated by the use of a layer of micro-lenses, there will always be some part of the surface of the sensor that will absorb light radiation, without transforming it into a useful electrical signal.
  • the light that hits the non-photo-sensitive devices can have non- desired effects, since some of the characteristics of these devices can be altered by the light that impinges on them.
  • an MOS transistor typically contains semiconductor p-n junctions that can capture photo-generated carriers and create an unwanted signal in response.
  • a more advanced process technology called back-side illumination (BSI)
  • BSI back-side illumination
  • FSI front-side illumination
  • the proposed invention gives advantages over both the BSI and the FSI approaches realized on a single wafer or die.
  • an Event-Based Vision Sensor is based on an array of pixels, which generate data in response to the light that hits them, each of them spatially and/or temporally independent of the others.
  • Each of these pixels contains circuitry that is divided between a photo-sensitive part (e.g., photodiode PD) and a non-photo-sensitive part (e.g., photoreceptor circuit PRC, capacitor C1, comparator A1, memory 50 and reset circuit R1).
  • This second part biases the photo-sensitive circuitry, collects the signal generated in response to the light and, frequently, performs a first stage of signal conditioning or processing. Examples of these types of pixels are given in the previous section (state of the art).
  • these sensors are manufactured as silicon integrated circuits (IC) based on a planar process.
  • IC silicon integrated circuits
  • the direct consequence of this is that part of the area of the pixel must be occupied by the non-photo-sensitive circuitry, effectively reducing the Fill Factor of the pixel (Figs. 2A, 2B and 2C).
  • the Quantum Efficiency is reduced. This is true also for back-side illuminated (BSI) ICs that are manufactured on a single wafer or die.
  • BSI back-side illuminated
  • the Fill Factor of a pixel in an Event-Based Vision Sensor can be maximized, by means of stacking multiple wafers or dies, with a technique known as Three-Dimensional Integrated Circuit (3D IC).
  • 3D IC Three-Dimensional Integrated Circuit
  • the circuitry that sits on a wafer below the top one does not receive any (or almost any) light radiation, which is captured by the top wafer, allowing for a great reduction of the unwanted behavior in the non-photo-sensitive circuitry due to the impinging light.
  • Another advantage of this approach is that the two wafers or dies can be manufactured using two different technology processes, making it possible to select the best available process for both the photo-sensitive devices and the rest of the circuit. It is often the case that the technology requirements of these two types of circuitry do not fully overlap.
  • FIGs. 3A-3C a first (and preferred) embodiment of the proposed invention is presented.
  • Figs. 3A and 3B the vertical section of the IC of an EBVS is depicted.
  • Fig. 3A the two stacked wafers (or dies) are shown. Connections to the bottom wafer (Wafer 2) are provided by wire bond pads 210 deposited on the top face of the top wafer (Wafer 1).
  • TSVs Through-Silicon Vias
  • electrical connections are provided through the body of the top wafer Wafer 1.
  • These TSVs end in Cu-Cu connections CC, such as copper ball bumps. In this way, the electrical connections are extended from the bottom of the top wafer to the electrical circuits on the top of the bottom wafer Wafer 2.
  • a “die” is a piece or a portion of a semiconductor wafer, typically in a rectangular shape, such as a chip.
  • this piece of semiconductor wafer includes a portion of an instance of an integrated circuit device, such as the Event-Based Vision Sensor.
  • the reference to wafer or die is based on the potential for different fabrication approaches.
  • the stacking can be performed at the wafer-level before dicing into dies. Or, the stacking can be performed on individual dies, after they have been cut away or diced from the wafer. Nevertheless, the final singulated device, i.e., EBVS, resulting from the fabrication process will be a stack of dies.
  • Fig. 3B shows the detail of the vertical section of a pixel of the EBVS. It is possible to see how the light hits only the surface of the top wafer Wafer 1, on the substrate side (BSI), and how in this wafer only the photo-sensitive device, the photodiode PD, is present. Then a single Cu-Cu connection CC per pixel can be used to connect to the bottom wafer (or die). In the bottom wafer Wafer 2, the non-photo-sensitive part of the pixel circuitry is implemented, the comparator A1, for example.
  • Fig. 3C shows a pixel circuit diagram. This example refers to the pixel circuit presented in PCT/IB2017/058526 and U.S. Pub. No. 2018/0191972 and Fig. 1, but other architectures for the event detection pixel can be used. It shows how the circuit is distributed among the two wafers (or dies): on the top wafer/die Wafer 1 only the photo-diode PD is implemented, while the rest of the pixel circuit is implemented in the bottom wafer/die; every pixel has a Cu-Cu connection CC between the photo-diode and the photoreceptor circuit. Also shown is the event detector on Wafer 2. The readout circuitry RO could be provided on Wafer 2 or on still another wafer or die.
  • FIG. 3D the circuit is shown in further detail, by a circuit schematic of the photoreceptor circuit PRC, which is implemented on Wafer 2.
  • FIGs. 4A and 4B another embodiment is shown.
  • the circuit chosen for the pixel is the same as the one presented in Figs. 3C and 3D, but the circuit components are arranged differently among the wafers/dies.
  • a wafer/die partitioning like the one shown in Fig. 4B would also allow the circuitry in the bottom wafer to be realized entirely with p-type MOSFET devices, since the event detector can be realized with only p-type devices.
  • This approach is interesting because it allows the pixel area to be reduced further, since the entire pixel field can be placed in the same n-well.
  • a minimum clearance between n-type MOSFET devices and an n-well (where the p-type MOSFET devices sit) must be respected. If there are no n-type devices and all the pixels are included in a single n-well, the area needed for a pixel can be smaller than in the case in which both types of devices are used in every pixel.
  • a Cu-Cu connection would typically introduce a certain resistance, due to the layers of metals and vias needed to reach the surface of one wafer and then the devices in the other, and this resistance introduces thermal noise.
  • this stack of metal layers and vias is typically manufactured using different metals, introducing noise associated with these metal junctions.
  • a solution like the one presented in Figs. 4A and 4B can be beneficial, without sacrificing too much of the area benefit introduced in the first place by the wafer/die stacking technology.
  • such a solution allows shrinking the pixel size, because fewer transistors, and hence less area, are needed on the lower wafer Wafer 2.
  • An additional advantage of this solution (and other solutions that include transistors on both wafers) is that the properties of the two transistors on the upper wafer can be optimized independently from the properties of the transistors on the lower wafer Wafer 2.
  • a third embodiment is shown. It again refers to the circuit presented in the previous embodiments, with the addition, in the top wafer Wafer 1, of the bias transistor of the frontend circuit.
  • a buffer stage, preferably realized as a source-follower amplifier stage, can be added as well.
  • This embodiment has the advantage of improving the driving capability of the circuit before the cross-wafer (cross-die) connection. Especially if a buffer stage is added, the resistance of the Cu-Cu connection CC affects the performance of the frontend less, since the load on the output node is reduced.
  • the Cu-Cu connection would in this case be between the output node of the frontend, which in one example consists of the gate of M1, the drain of MA1 and the drain of MA2, and the input of the event detector circuit, which corresponds to one of the plates of the capacitor C1.
  • the capacitor C1 is typically manufactured as a MIM (metal-insulator-metal) capacitor.
  • MIM metal-insulator-metal
  • This type of device is manufactured using two metal layers that can be the topmost layers of the silicon planar process, or possibly one metal layer below the topmost.
  • the output of the circuit implemented in the top wafer could then be directly connected to the top plate of the MIM capacitor, allowing the dimension of the capacitor to be maximized: since the circuit in the top wafer does not need to connect to any node other than the top plate of the MIM capacitor C1, the capacitor can occupy the entire area of the pixel.
  • FIG. 5B It is a simplified representation of the two stacked wafers Wafer 1, Wafer 2, realized in silicon planar process technology, that depicts the various layers that form a wafer. It is possible to see how the last metal layer in the top wafer can be directly connected to one of the plates of the MIM capacitor C1.
  • the top wafer/die is arranged as BSI (back-side illuminated), so it receives the light on the substrate side and it is connected to the other wafer using Cu-Cu connections CC, so that the two respective top metal layers of the two wafers/dies are bonded together.
  • the photo-diode junction depicted here is realized as an n-well in a p-substrate, but more advanced structures are actually preferred.
  • Fig. 5C and Fig. 5D show two different circuit layouts. Specifically, Fig. 5D shows the frontend that includes a buffer amplifier B (preferably a source-follower amplifier).
  • a buffer amplifier B preferably a source-follower amplifier
  • an FSI (front-side illuminated) approach for the top wafer could be realized, connecting it to the bottom wafer using TSVs (through-silicon vias).
  • the MIM capacitor C1 could alternatively be placed in the top wafer (or die) Wafer 1, instead of in the bottom wafer (or die) Wafer 2.
  • Such an approach could be justified, for example, by specific considerations on the metal layers, in order to distribute them among the wafers (or dies) in the most cost-effective way: the two wafers could be realized in two different technologies, and in one of these technology processes adding metal layers, or MIM-specific metal layers, could be less expensive than in the other.
  • In Fig. 6A an alternative embodiment of the pixel frontend is depicted. In this case as well, only one cross-wafer (or cross-die) connection per pixel is needed.
  • The top wafer Wafer 1 contains, together with the photo-diode PD, the transistors that form the pixel frontend, including the photoreceptor circuit PRC.
  • Fig. 6B shows how an amplification stage B, preferably realized as a source-follower amplifier, could optionally be added and also included in the top wafer Wafer 1.
  • The Cu-Cu connection CC can have a non-negligible resistance that would directly load the output of the frontend if there were no amplification stage.
  • In Fig. 7A the pixel frontend circuit, similar to that shown in Fig. 5B, is depicted, with the amplification stage explicitly represented as a source-follower amplifier realized with n-FET MOS devices.
  • The solution proposed here does not have p-FET devices in the top wafer, so more than one cross-wafer (or cross-die) connection is required, here specifically two per pixel.
  • In Fig. 7B the alternative pixel frontend implementation is shown, the same as the one in Fig. 6B.
  • The illuminated top wafer does not contain p-FET MOS devices, and for this reason multiple cross-wafer (or cross-die) connections per pixel are required; in this particular case, four per pixel are needed.
  • The buffer stage is realized as a source-follower amplifier consisting of n-FET devices.
  • In Fig. 7C the same pixel architecture as in Fig. 7B is shown, but the source-follower amplifier that acts as a buffer is realized with p-FET MOS devices. In this way, one connection per pixel can be spared, requiring only three.


Abstract

An event-based vision sensor is fabricated using an advanced stacking technique, known as Three-Dimensional Integrated Circuit, which stacks multiple wafers (or dies) and interconnects them vertically. The electronic integrated circuits of the sensor are distributed between the two or more electrically connected dies.

Description

EVENT-BASED VISION SENSOR MANUFACTURED WITH 3D-IC TECHNOLOGY
RELATED APPLICATIONS
[ 0001] This application claims the benefit under 35 USC 119(e) of U.S. Provisional Application No. 62/642,838, filed on March 14, 2018, which is incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION
[ 0002 ] One of the important parameters in the design of an event-based pixel array
(also called Dynamic Vision Sensor (DVS)) is the quantum efficiency (QE), which is the ratio between the number of electrons generated in response to a light signal and the number of photons of that light signal. This parameter directly depends on the fill factor (FF), which is the ratio between the area of the photo-sensitive device exposed to the light and the total area of the integrated circuit exposed to the light.
[ 0003 ] Since event-based vision sensors today are realized using a silicon planar process, the area exposed to the light must be shared between the photo-sensitive devices and the other semiconductor devices that form the pixel circuitry. This approach has two main disadvantages: the area of the photo-sensitive devices is limited, and the circuitry that is not intended to be exposed to the light can have its performance degraded as a consequence of this radiation exposure.
SUMMARY OF THE INVENTION
[ 0004 ] The present invention has the main purpose of mitigating these two issues by using, in the fabrication of an event-based vision sensor, an advanced stacking technique known as Three-Dimensional Integrated Circuit, which stacks multiple wafers (or dies) and interconnects them vertically.
[ 0005] A number of motivations exist, including:
[ 0006 ] Increase FF;
[ 0007 ] Shield circuits that do not need/must not receive light; and
[ 0008 ] The different components of a pixel have different requirements that can best be fulfilled by realizing them in different IC processes (the photo-sensitive devices could even theoretically be manufactured in a non-silicon-based technology, e.g. GaAs). [ 0009] In general, according to one aspect, the invention features an Event-Based Vision Sensor (EBVS) including stacked dies that are connected vertically. As a result, photo-sensitive devices of each pixel of the pixel array can be located in the die exposed to illumination, and other devices not useful for light capture can be in other wafers or dies.
[ 0010] Preferably, there is at least one connection for every pixel of the pixel array between the dies.
[ 0011] Typically, photodiodes of each pixel of the pixel array are in a first die and respective event detectors of each pixel of the pixel array are in a second die, and interconnections between the first and the second die connect the photodiodes to the respective event detectors.
[ 0012 ] This approach can be used with a frontside illumination architecture or backside illumination architecture.
[ 0013] Moreover, there are a number of different ways the photoreceptor circuit of each pixel of the pixel array can be implemented. For example, it can be located in the second die, in the first die, or distributed between the first die and the second die.
[ 0014 ] An additional amplification stage could be added in the first die.
[ 0015] Often n-FET transistors are used in the first wafer or die, and both n-FET and p-FET transistors are used in the second die.
[ 0016] In addition, the transistor properties can differ between the transistors on the first die and the second die, including different gate oxide thicknesses or different implants.
[ 0017 ] In general, according to one aspect, the invention features a method for fabricating an Event-Based Vision Sensor. Generally, this method comprises fabricating different devices of each pixel of the pixel array in different wafers or dies and then stacking the wafers or dies.
[ 0018 ] As used here, a "die" is a piece or a portion of a semiconductor wafer, typically in a rectangular shape, such as a chip. Here, this piece of semiconductor wafer includes a portion of an instance of an integrated circuit device, such as the Event-Based Vision Sensor. The reference to wafer or die is based on the potential for different fabrication approaches. The stacking can be performed at the wafer level, before dicing into dies. Or, the stacking can be performed on individual dies, after they have been cut away or diced from the wafer. As a result, the final device resulting from the fabrication process will be a stack of dies.
[ 0019] The method would then include connecting each of the pixels using Cu-Cu connections, for example.
[ 0020] In one implementation, the method further involves fabricating photodiodes of each pixel of the pixel array in a first wafer or die and fabricating respective event detectors of each pixel of the pixel array in a second wafer or die.
[ 0021] The above and other features of the invention including various novel details of construction and combinations of parts, and other advantages, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular method and device embodying the invention are shown by way of illustration and not as a limitation of the invention. The principles and features of this invention may be employed in various and numerous embodiments without departing from the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[ 0022 ] In the accompanying drawings, reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale; emphasis has instead been placed upon illustrating the principles of the invention. Of the drawings:
[ 0023] Fig. 1: Circuit diagram showing a state-of-the-art (SOA) pixel implementation for an event-based image sensor, e.g. according to PCT/IB2017/058526 and U.S. Pub. No. 2018/0191972;
[ 0024 ] Figs. 2A-2C: A SOA event-based image sensor: single wafer (partial vertical cross-section in Fig. 2A and Fig. 2B, partial top view in Fig. 2C) that implements the sensor; Fig. 2A refers to a front-side illumination (FSI) application; Fig. 2B refers to a back-side illumination (BSI) application;
[ 0025] Figs. 3A-3D: A partial vertical cross-section of two (2) stacked wafers that show a preferred embodiment before dicing the wafers (Fig. 3A); partial vertical cross- section focused on one pixel, showing a back-side illuminated (BSI) top wafer with only the photo-diode (PD) and a single connection per pixel to the bottom wafer (Fig. 3B); a block diagram (Fig. 3C) and a circuit diagram (Fig. 3D) showing the details of the pixel frontend circuit and how it is arranged between the wafers/dies;
[ 002 6 ] Figs. 4A-4B: A partial vertical cross-section of two (2) stacked wafers that shows an alternative embodiment, where the top wafer also contains two transistors of the frontend circuit (Fig. 4A); a circuit diagram showing the details of the pixel frontend circuit and how it is arranged between the wafers/dies (Fig. 4B);
[ 0027 ] Figs. 5A-5D: A partial vertical cross-section of two (2) stacked wafers that shows the entire frontend in the top wafer, optionally including a source-follower stage (Fig. 5A); partial vertical cross-section focused on one pixel showing how the output from the first wafer can then directly connect to one of the plates of the MIM (metal-insulator-metal) capacitor of the event detector (which is located between the two topmost metals) on the bottom wafer, shown in a simplified vertical section that includes details on the silicon process layers for the two stacked wafers (Fig. 5B); two circuit diagrams showing the details of the pixel frontend circuit (Figs. 5C and 5D);
[ 0028 ] Figs . 6A-6B : A circuit diagram showing an alternative pixel frontend circuit, with an additional p-FET that improves performance (Fig. 6A); a circuit diagram showing the separation between the wafers/dies (Fig. 6B);
[ 0029] Figs. 7A-7C: Three circuit diagrams showing variations of the pixel frontend circuit, in which multiple Cu-Cu connections in every pixel are needed; Fig. 7A is preferred; in Figs. 7B and 7C it is shown how the elements of the circuit could be arranged between the wafers/dies such that the top wafer does not contain p-FET devices, showing how in this case more than two cross-wafer (or cross-die) connections per pixel are required.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[ 0030] The invention now will be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
[ 0031] As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Further, the singular forms and the articles "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms: includes, comprises, including and/or comprising, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements,
components, and/or groups thereof. Further, it will be understood that when an element, including component or subsystem, is referred to and/or shown as being connected or coupled to another element, it can be directly connected or coupled to the other element or intervening elements may be present.
[ 0032 ] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[ 0033 ] Definitions
[ 0034 ] An Event-Based Pixel Array (EBPA) is an array of pixels containing photo-sensitive devices; these pixels, spatially and/or temporally independent from each other, generate discretized data as a function of the light radiation that they receive.
[ 0035 ] An Event-based Vision Sensor (EBVS) is a sensor that outputs data extracted and/or elaborated from an EBPA.
[ 0036] The Fill Factor (FF) is defined as the ratio between the area of the photo-sensitive device present in a pixel and the total area of that pixel. It is a measure of how much of the total light radiation that hits the surface of a sensor can be effectively captured by the sensor.
[ 0037 ] The Quantum Efficiency (QE) is defined as the ratio between the number of electrons that are generated and transformed into an electrical signal and the number of photons that hit the surface of a photo-sensitive sensor.
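The two definitions above are simple ratios, and can be sketched with hypothetical numbers (all function names and area/photon values below are our own illustrative assumptions, not figures from the patent):

```python
# Illustrative sketch of the FF and QE definitions above.
# All numeric values are hypothetical, chosen only for the example.

def fill_factor(photosensitive_area_um2, pixel_area_um2):
    """FF: photo-sensitive area divided by the total pixel area."""
    return photosensitive_area_um2 / pixel_area_um2

def quantum_efficiency(electrons_generated, photons_incident):
    """QE: electrons generated (and converted to signal) per incident photon."""
    return electrons_generated / photons_incident

# A single-wafer pixel shares its area between photodiode and circuitry,
# while a 3D-IC stacked pixel can devote nearly the whole area to the photodiode.
ff_single = fill_factor(40.0, 100.0)    # 0.4
ff_stacked = fill_factor(95.0, 100.0)   # 0.95
qe = quantum_efficiency(600, 1000)      # 0.6
```

The point of the comparison is only that stacking raises FF, and thereby the fraction of impinging photons that can contribute to QE.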
[ 0038 ] 3D IC: acronym for Three-Dimensional Integrated Circuit; it is a technique to manufacture an integrated circuit by stacking silicon wafers or dies and interconnecting them vertically. [ 0039] Front-Side Illumination (FSI): type of image sensor that is realized as an integrated circuit (IC), such that it is illuminated from the top of the die, which is the side onto which the layers of the planar process are realized; all the devices and the metal routing, together with the photo-sensitive devices, receive direct light radiation.
[ 0040 ] Back-Side Illumination (BSI): type of image sensor that is realized as an IC, such that it is illuminated from the bottom of the die, which is the side of the substrate; the devices and the metal routing do not receive direct light radiation, but only through the substrate.
[ 0041 ] State-of-the-art
[ 0042 ] Examples of Event-Based Vision Sensors can be found, for example, in PCT/IB2017/058526, US7728269B2, or U.S. Pub. No. 2018/0191972.
[ 0043] An example of the pixel architecture of an EBPA of an EBVS, which will be used as a reference in this document, is shown in Fig. 1. It is taken from PCT/IB2017/058526 and U.S. Pub. No. 2018/0191972, which are incorporated herein by this reference in their entirety. However, the core concepts of the proposed invention can be applied to virtually any Event-Based Vision Sensor realized as an IC, independent of the specific pixel architecture used.
[ 0044 ] The major components of a pixel circuit are enumerated below.
[ 0045] 1. Photoreceptor module. As shown in the figure, the pixel circuit 100 contains a photodiode PD, or other photosensor, to measure impinging light and convert the light intensity to current Iphoto; a photoreceptor circuit PRC to generate a photoreceptor signal Vpr dependent on the light intensity; and a memory capacitor Cl to remember past photoreceptor signal. The photosensor PD and photoreceptor circuit PRC constitute the photoreceptor module PR.
[ 0046] 2. Memory capacitor Cl: Receives the photoreceptor signal Vpr such that a first plate of the capacitor carries a charge that is responsive to the photoreceptor signal Vpr, and thus to the light received by the photosensor PD, and is part of the event detector ED. A second plate of the memory capacitor Cl is connected to the comparator node (inverting input) of Al. Thus the voltage of the comparator node, Vdiff, varies with changes in the photoreceptor signal Vpr. [ 0047 ] 3. Comparator Al: This is a means to compare the difference between the current photoreceptor signal Vpr and the past photoreceptor signal to a threshold, and is part of the event detector ED. This comparator Al can be in each pixel, or shared between a subset (for example a column) of pixels. In the preferred embodiment the comparator will be integral to the pixel, with each pixel having a dedicated comparator Al.
[ 0048 ] 4. Memory: Memory 50 stores the comparator output based on a sample signal from the controller 60 and is part of the event detector ED. Memory can be a sampling circuit (for example a switch and a parasitic or explicit capacitor) or a digital memory circuit (a latch or a flip-flop). In one embodiment, the memory will be a sampling circuit and each pixel will have two memories.
[ 0049] 5. A conditional reset circuit Rl: Condition for reset is a combination of the state of the memorized comparator output and a reset signal applied by the controller 60 and is part of the event detector ED.
[ 0050 ] 6. Peripheral circuit components: The comparator Al and the memory 50 can be located in the pixel or in peripheral circuits (outside the pixel circuit).
[ 0051] The peripheral circuits contain a controller 60 which applies threshold signals to the comparator Al, sends control signals to memory 50 and selects times when the conditional reset circuit Rl becomes active.
[ 0052 ] The peripheral circuits may also contain a readout circuit, which reads the content of the memory 50, determines if the light intensity for a given pixel has increased, decreased, or unchanged, and sends the output (computed from the current memory value) to a processor.
[ 0053] In more detail, the comparator tells if light has increased and/or decreased. For Off event: if Vdiff is lower than the threshold Voff (on Vb), the comparator output is high, and this level is stored in the memory. This means a decrease is detected. If Vdiff is not lower than the threshold, the comparator output is low: no decrease detected.
[ 0054 ] The only difficulty is that for On event, a low comparator output means an increase, while high means no change; but for Off event high comparator output means decrease while low means no change.
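The decoding rules in the two paragraphs above amount to a small truth table, which can be sketched as follows (a hedged illustration; the function name and the string labels are ours, not from the patent):

```python
def decode_event(comparator_high, threshold_applied):
    """Interpret a sampled comparator output.

    threshold_applied: "off" if the Off threshold Voff was on Vb when the
    output was sampled, "on" if the On threshold was applied instead.
    """
    if threshold_applied == "off":
        # High output: Vdiff fell below Voff, so a decrease was detected.
        return "decrease" if comparator_high else "no_change"
    if threshold_applied == "on":
        # Low output means an increase; high output means no change.
        return "no_change" if comparator_high else "increase"
    raise ValueError('threshold_applied must be "on" or "off"')
```

Note the inverted polarity between the two cases: a high output signals an event only for the Off comparison, a low output only for the On comparison.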
[ 0055] So the readout must know the memory content and which threshold was applied. [ 0056] The pixel circuit 100 and controller 60 operate as follows.
[ 0057 ] A change in light intensity received by the photosensor PD will translate to a change in photoreceptor signal Vpr. When the reset circuit Rl is not conducting, the changes in Vpr will be reflected also in the voltage Vdiff at a comparator node at the inverting input (-) to the comparator Al. This occurs because the voltage across the memory capacitor Cl stays constant.
[ 0058 ] At times selected by the controller 60, the comparator Al compares the voltage at the comparator node at the second terminal of the memory capacitor Cl (Vdiff) to a threshold voltage Vb (from controller) applied to the non-inverting input (+) of the comparator Al.
[ 0059] The controller 60 operates the memory 50 to store the comparator output Vcomp. The memory 50 is typically implemented as part of the pixel circuit 100 as shown. In other embodiments, however, the memory 50 is implemented as part of column logic circuit (peripheral circuit, one per each column of the pixel array).
[ 0060] If the state of the stored comparator output held in the memory 50 indicates a change in light intensity AND the global reset signal GlobalReset from the controller 60 is active, the conditional reset circuit Rl is conducting. Here "AND" indicates the logical AND operator. With the conditional reset circuit Rl in a conductive state, the voltage at the comparator node at the inverting input of the comparator Al (Vdiff) is reset to a known level. Thus, it stores the current photoreceptor signal Vpr on the memory capacitor Cl.
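The operating sequence just described (sample, memorize, conditional reset) can be summarized in a small behavioral model. This is a sketch under simplifying assumptions of our own — ideal components, Vpr treated as a plain number, and class, method, and threshold names chosen for the example — not the patent circuit itself:

```python
class EventPixelModel:
    """Behavioral sketch of one event-pixel cycle: sample, memorize, reset."""

    def __init__(self, v_on, v_off, v_reset=0.0):
        self.v_on, self.v_off, self.v_reset = v_on, v_off, v_reset
        self.vpr_at_reset = 0.0  # Vpr value stored on the memory capacitor at the last reset
        self.memory = None       # memorized comparator interpretation

    def sample(self, vpr, threshold):
        # The memory capacitor keeps its charge, so Vdiff follows the change
        # of Vpr since the last reset.
        vdiff = self.v_reset + (vpr - self.vpr_at_reset)
        if threshold == "off":
            self.memory = "decrease" if vdiff < self.v_off else None
        elif threshold == "on":
            self.memory = "increase" if vdiff > self.v_on else None

    def conditional_reset(self, global_reset, vpr):
        # Rl conducts only if an event was memorized AND GlobalReset is active;
        # it then stores the current Vpr and returns Vdiff to its reset level.
        event = self.memory
        if event is not None and global_reset:
            self.vpr_at_reset = vpr
            self.memory = None
        return event

# One cycle: light drops, the Off comparison fires, the pixel is reset.
px = EventPixelModel(v_on=0.05, v_off=-0.05)
px.sample(vpr=-0.10, threshold="off")
event = px.conditional_reset(global_reset=True, vpr=-0.10)  # "decrease"
```

After the reset, sampling again at the same Vpr produces no further event, which is the defining property of the change-detecting pixel.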
[ 0061 ] Until now, these EBVSs having EBPAs of pixels as shown in Fig. 1 have been manufactured as integrated circuits using a silicon planar process on a single wafer. Using this technology, the semiconductor devices (e.g. MOS transistors, diodes and photo-diodes, polysilicon resistors, etc.) can be arranged only in a single layer; they cannot be vertically stacked.
[ 0062 ] In this way, the area of a pixel 100 must be shared between the photo-sensitive devices (PD, for example) and the rest of the circuit, as can be seen in Fig. 2A, showing the frontside illumination architecture, and Fig. 2B, showing the backside illumination architecture, and especially in Fig. 2C, which shows the plan view. This means that the photo-detectors PD cannot use all the light that hits the surface. Even if this issue can be mitigated by the use of a layer of micro-lenses, there will always be some part of the surface of the sensor that absorbs light radiation without transforming it into a useful electrical signal.
[ 0063 ] Moreover, the light that hits the non-photo-sensitive devices can have undesired effects, since some of the characteristics of these devices can be altered by the light that impinges on them. For example, a MOS transistor typically contains semiconductor p-n junctions that can capture photo-generated carriers and create an unwanted signal in response.
[ 0064 ] A more advanced process technology, called back-side illumination (BSI), allows for improved use of the available area by exposing to the light the back side of the wafer or die, which is the side of the silicon substrate. In this way there is more freedom for routing the metal connections, which can be placed over the photo-sensitive device in the pixel. In a front-side illumination (FSI) technology, by contrast, the photo-sensitive device must be exposed to the light towards the top of the wafer, so no metal can be placed on top of the photo-sensitive devices, to maximize the light that is captured. The proposed invention gives advantages over both the BSI and the FSI approaches realized on a single wafer or die.
[ 0065 ] It is well known that an Event-Based Vision Sensor is based on an array of pixels, which generate data in response to the light that hits them, each of them spatially and/or temporally independent from the others.
[ 0066 ] Each of these pixels contains circuitry that is divided between a photo-sensitive part (e.g., photodiode PD) and a non-photo-sensitive part (e.g., photoreceptor circuit PRC, capacitor Cl, comparator Al, memory 50 and reset circuit Rl). This second part takes care of biasing the photo-sensitive circuitry, collecting the signal generated in response to the light and, frequently, performing a first signal conditioning or elaboration. Examples of these types of pixel are referenced in the previous section (State-of-the-art).
[ 0067 ] Typically, these sensors are manufactured as silicon integrated circuits (ICs) based on a planar process. This means that the photo-sensitive part of a pixel and the rest of the circuitry must be realized using a single layer of semiconductor devices. The direct consequence of this is that part of the area of the pixel must be occupied by the non-photo-sensitive circuitry, effectively reducing the Fill Factor of the pixel (Figs. 2A, 2B and 2C). Hence, the Quantum Efficiency is reduced. This is true also for back-side illuminated (BSI) ICs that are manufactured on a single wafer or die. [ 0068 ] With the proposed invention, the Fill Factor of a pixel in an Event-Based Vision Sensor can be maximized by means of stacking multiple wafers or dies, with a technique known as Three-Dimensional Integrated Circuit (3D IC).
[ 0069 ] Using this technology, it is possible to split the circuitry of a pixel between different wafers or dies, with the possibility of maximizing the area of the photo-sensitive devices, since they can overlap the non-photo-sensitive part of the circuit. Moreover, the circuitry that sits on a wafer below the top one does not receive any (or almost any) light radiation, which is captured by the top wafer, allowing for a great reduction of the unwanted behavior in the non-photo-sensitive circuitry due to the impinging light.
[ 0070] Another advantage of this approach is that the two wafers or dies can be manufactured using two different technological processes, allowing selection of the best available process for both the photo-sensitive devices and the rest of the circuit. It is often the case that the technology requirements of these two types of circuitry do not fully overlap.
[ 0071] Examples of Embodiments
[ 0072 ] In Figs. 3A-3C a first (and preferred) embodiment of the proposed invention is presented. In Figs. 3A and 3B the vertical section of the IC of an EBVS is depicted. In Fig. 3A, the two stacked wafers (or dies) are shown. Connections to the bottom wafer (Wafer 2) are provided by wire bond pads 210 deposited on the top face of the top wafer (Wafer 1). By using TSVs (Through-Silicon Vias), electrical connections are provided through the body of the top wafer Wafer 1. These TSVs end in Cu-Cu connections CC, such as copper ball bumps. In this way, the electrical connections are extended from the bottom of the top wafer to the electrical circuits on the top of the bottom wafer Wafer 2.
[ 0073] Note that in this description, die and wafer are used interchangeably. Generally, a "die" is a piece or a portion of a semiconductor wafer, typically in a rectangular shape, such as a chip. Here, this piece of semiconductor wafer includes a portion of an instance of an integrated circuit device, such as the Event-Based Vision Sensor. The reference to wafer or die is based on the potential for different fabrication approaches. The stacking can be performed at the wafer level, before dicing into dies. Or, the stacking can be performed on individual dies, after they have been cut away or diced from the wafer. Nevertheless, the final singulated device, i.e., EBVS, resulting from the fabrication process will be a stack of dies.
[ 0074 ] Fig. 3B shows the detail of the vertical section of a pixel of the EBVS. It is possible to see how the light hits only the surface of the top wafer Wafer 1, on the substrate side (BSI), and how in this wafer only photo-sensitive devices are present, namely the photodiode PD. Then a single Cu-Cu connection CC per pixel can be used to connect to the bottom wafer (or die). In the bottom wafer Wafer 2, the non-photo-sensitive part of the pixel circuitry is implemented, the comparator Al, for example.
[ 0075] Fig. 3C shows a pixel circuit diagram. This example refers to the pixel circuit presented in PCT/IB2017/058526, U.S. Pub. No. 2018/0191972 and Fig. 1, but other architectures for event-detection pixels can be used. It is shown how the circuit is distributed among the two wafers (or dies): in the top wafer/die Wafer 1 only the photo-diode PD is implemented, while the rest of the pixel circuit is implemented in the bottom wafer/die; in every pixel there is a Cu-Cu connection CC between the photo-diode and the photoreceptor circuit. Also shown is the event detector on Wafer 2. The readout circuitry RO could be provided on Wafer 2 or on still another wafer or die.
[ 0076] In Fig. 3D the circuit is shown in further detail, by a circuit schematic of the photoreceptor circuit PRC, which is implemented on Wafer 2.
[ 0077 ] In Figs. 4A and 4B another embodiment is shown. The circuit chosen for the pixel is the same as the one presented in Figs. 3C and 3D, but the circuit components are arranged differently among the wafers/dies.
[ 0078 ] Moreover, a wafer/die partitioning like the one shown in Fig. 4B would also make it possible to realize the circuitry in the bottom wafer entirely with p-type MOSFET devices, since the Event-Detector can be realized with only p-type devices. This approach is interesting because it allows further reducing the pixel area, since the entire pixel field can be placed in the same n-well. Typically, a minimum clearance between n-type MOSFET devices and an n-well (where the p-type MOSFET devices sit) must be respected. If there are no n-type devices and all the pixels are included in a single n-well, the area needed for a pixel can be smaller than in the case in which both types of devices are used in every pixel. [ 0079 ] As shown in Fig. 4B, in the top wafer Wafer 1, together with the photo-diode PD, two n-FET transistors (Ml and MA1) from the photo-receptive circuit PRC are added. In this way the Cu-Cu connection CC across wafers/dies is between the node that connects to the gate of the feedback transistor Ml and the drain of MA1, and the node that connects to the drain of the bias p-FET transistor MA2 and the input of the event detector, which corresponds to one of the plates of the capacitor Cl in Fig. 1. Using this
arrangement, it is possible to improve the performance of the photo-detection without reducing the fill factor too much, while keeping the complexity of the top wafer low
(effectively limiting the number of process masks needed for manufacture). The performance is improved especially in terms of noise: a Cu-Cu connection would typically introduce a certain resistance, due to the layers of metals and vias needed to reach the surface of one wafer and then the devices in the other, and this resistance introduces thermal noise. Moreover, this stack of metal layers and vias is typically manufactured using different metals, introducing noise associated with these metal junctions. For this reason, a solution like the one presented in Figs. 4A and 4B can be beneficial, without sacrificing too much of the area benefit introduced in the first place by the wafer/die stacking technology. Also, such a solution allows shrinking the pixel size, because less area is needed on the lower wafer Wafer 2, since fewer transistors are needed. An additional advantage of this solution (and other solutions that include transistors on both wafers) is that the properties of the transistors on the upper wafer can be optimized independently of the properties of the transistors on the lower wafer Wafer 2.
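The order of magnitude of the thermal noise contributed by such a connection resistance can be estimated with the standard Johnson-Nyquist relation, v_rms = sqrt(4 k T R Δf). The sketch below uses resistance and bandwidth values that are purely illustrative assumptions of ours, not figures from the patent:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(resistance_ohm, bandwidth_hz, temperature_k=300.0):
    """RMS thermal (Johnson-Nyquist) noise voltage: sqrt(4*k*T*R*df)."""
    return math.sqrt(4.0 * K_B * temperature_k * resistance_ohm * bandwidth_hz)

# A few ohms of via/metal-stack resistance over an assumed 1 MHz frontend
# bandwidth contributes well under a microvolt of RMS noise.
vn = johnson_noise_vrms(resistance_ohm=5.0, bandwidth_hz=1e6)
```

Since the noise voltage grows as the square root of the resistance, quadrupling the connection resistance doubles its RMS noise contribution, which is why keeping the connection in a low-impedance part of the signal path matters.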
[ 0080] In Figs. 5A-5D, a third embodiment is shown. This again refers to the circuit presented in the previous embodiments, with the addition in the top wafer Wafer 1 of the bias transistor of the frontend circuit. Optionally, a buffer stage, preferably realized as a source-follower amplifier stage, can be added as well.
[ 0081] This embodiment has the advantage of improving the driving capability of the circuit before the cross-wafer (cross-die) connection. Especially if a buffer stage is added, the resistance of the Cu-Cu connection CC has less effect on the performance of the frontend, since the load on the output node is reduced. The Cu-Cu connection would in this case be between the output node of the frontend, which now consists, in one example, of the gate of Ml, the drain of MA1 and the drain of MA2, and the input of the event detector circuit, which corresponds to one of the plates of the capacitor Cl.
[ 0082 ] The biggest advantage of this approach, however, is related to the fact that the capacitor Cl is typically manufactured as a MIM (metal-insulator-metal) capacitor. This type of device is manufactured using two metal layers that can be the topmost layers of the silicon planar process, or possibly one metal layer below the topmost. The output of the circuit implemented in the top wafer could then be directly connected to the top plate of the MIM capacitor, allowing for the maximization of the dimension of the capacitor: it can be realized by occupying the entire area of the pixel, since the circuit in the top wafer would not need to connect to any other node except the top plate of the MIM capacitor Cl.
[ 0083] This approach, then, significantly eases the pixel layout, ultimately allowing for a smaller pixel, by distributing the devices wisely between the two wafers/dies.
[ 0084 ] This can be seen in Fig. 5B, a simplified representation of the two stacked wafers Wafer 1, Wafer 2, realized in silicon planar process technology, that depicts the various layers forming a wafer. It shows how the last metal layer in the top wafer can be directly connected to one of the plates of the MIM capacitor C1. In this specific representation the top wafer/die is arranged as BSI (back-side illuminated), so it receives the light on the substrate side and is connected to the other wafer using Cu-Cu connections CC, so that the two respective top metal layers of the two wafers/dies are bonded together. For drawing simplicity, the photo-diode junction depicted here is realized as an n-well in a p-substrate, but more advanced structures are actually preferred.
[ 0085 ] Fig. 5C and Fig. 5D show two different circuit layouts. Specifically, Fig. 5D shows the frontend including a buffer amplifier B (preferably a source-follower amplifier).
[ 0086] An FSI (front-side illuminated) approach could alternatively be used for the top wafer, connecting it to the bottom wafer using TSVs (through-silicon vias).
[ 0087 ] Using this same approach and the same arrangement of transistors between the two wafers (or dies), the MIM capacitor C1 could alternatively be placed in the top wafer (or die) Wafer 1, instead of in the bottom wafer (or die) Wafer 2. Such an approach could be justified, for example, by specific considerations on the metal layers, in order to distribute them among the wafers (or dies) according to the most cost-effective solution: the two wafers could be realized in two different technologies, and in one of these technology processes adding metal layers, or MIM-specific metal layers, could be less expensive than in the other.
[ 0088 ] In all the wafer representations shown throughout this document, a layer stack that implements micro-lenses and/or light waveguides can be added on the side exposed to the light, to improve the QE of the photo-receptors; it has not been depicted in the figures for simplicity.
[ 0089] In Fig. 6A an alternative embodiment for the pixel frontend is depicted. In this case, as well, only one cross-wafer (or die) connection per pixel is needed. The top wafer Wafer 1 contains, together with the photo-diode PD, the transistors that form the pixel frontend including the photoreceptor circuit PRC.
[ 0090 ] Fig. 6B shows how an amplification stage B, preferably realized as a source-follower amplifier, could optionally be added and also included in the top wafer Wafer 1. This has the advantage of improving the driving capabilities of the frontend, effectively limiting the load on the output node of the frontend (gate of M1, drain of MA1 and drain of MA2). As a matter of fact, the Cu-Cu connection CC can have a non-negligible resistance that would load the output of the frontend directly if there were no amplification stage.
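A first-order way to see the benefit of the buffer (all component values below are assumed for illustration, not taken from the disclosure) is to compare the pole formed by the driving node's output resistance and the capacitive load of the interconnect:

```python
import math

def pole_frequency_hz(r_out_ohm, c_load_f):
    """First-order -3 dB frequency of an output node loaded by c_load."""
    return 1.0 / (2.0 * math.pi * r_out_ohm * c_load_f)

# Assumed illustrative values: a high-impedance frontend output (~1 Mohm)
# driving ~100 fF of interconnect plus C1 plate directly, versus driving it
# through a source follower whose output resistance is ~10 kohm (roughly 1/gm).
direct   = pole_frequency_hz(1e6, 100e-15)   # frontend drives the load itself
buffered = pole_frequency_hz(10e3, 100e-15)  # buffer isolates the frontend node
print(f"direct: {direct / 1e6:.2f} MHz, buffered: {buffered / 1e6:.1f} MHz")
```

Under these assumptions the buffered node is two orders of magnitude faster, which is the sense in which the source follower "limits the load" seen by the frontend output.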
[ 0091] In Fig. 7A the pixel frontend circuit, similar to that shown in Fig. 5B, is depicted, with the amplification stage explicitly represented as a source-follower amplifier realized with n-FET MOS devices. The solution proposed here does not have p-FET devices in the top wafer, so more than one cross-wafer (or die) connection is required, here specifically two per pixel.
[ 0092 ] The choice of not having p-FET MOS devices in the top wafer has the advantage of improving the QE, because if n-wells are present in the illuminated wafer, they attract carriers generated by the impinging light, acting as parasitic photo-diodes connected between the supply voltage and ground.
[ 0093 ] In Fig. 7B the alternative pixel frontend implementation is shown, the same as the one in Fig. 6B. In this proposed embodiment, as in Fig. 7A, the top illuminated wafer does not contain p-FET MOS devices, and for this reason multiple cross-wafer (or die) connections per pixel are required; in this particular case, four per pixel are needed. The buffer stage is realized as a source-follower amplifier consisting of n-FET devices.

[ 0094 ] In Fig. 7C the same pixel architecture used in Fig. 7B is shown, but the source-follower amplifier that acts as buffer is realized with p-FET MOS devices. In this way, one connection per pixel can be spared, requiring only three.
[ 0095 ] In general, it is possible to partition the pixel circuitry in many different ways, each with its own advantages and drawbacks. For example, if the focus is on finding the best technology for each part of the circuit, the digital and analog parts of the circuit can be separated. Moreover, the stacking technology poses no theoretical limit to the number of wafers/dies that can be stacked together and connected vertically, by the use of Cu-Cu connections and TSVs, for instance. The connections can be placed in every pixel, or on the edges of the pixel array, for example once per column and/or once per row.
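However the devices are partitioned between wafers, the pixel's behavior remains that of an event-based frontend: a logarithmic photoreceptor followed by a change detector (around C1) that emits ON/OFF events when the log intensity moves by more than a threshold from the last reset level. A behavioral sketch (the threshold value and function names are illustrative, not from the disclosure):

```python
import math

def detect_events(intensities, threshold=0.2):
    """Behavioral model of an event-based pixel: emit +1 (ON) / -1 (OFF)
    whenever log intensity moves more than `threshold` away from the last
    reset level, then reset the reference, as the change detector does."""
    events = []
    ref = math.log(intensities[0])
    for i, lum in enumerate(intensities[1:], start=1):
        diff = math.log(lum) - ref
        if diff > threshold:
            events.append((i, +1))   # ON event: brightness increased
            ref = math.log(lum)
        elif diff < -threshold:
            events.append((i, -1))   # OFF event: brightness decreased
            ref = math.log(lum)
    return events

# A brightening-then-darkening stimulus produces ON events followed by OFF events;
# the constant-intensity sample in the middle produces none.
print(detect_events([100, 130, 170, 170, 120, 80]))  # [(1, 1), (2, 1), (4, -1), (5, -1)]
```

Because only changes are signaled, the cross-wafer connections carry sparse event activity rather than continuous pixel values, which is part of why the per-pixel connection count discussed above can stay small.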
[ 0096] While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims

What is claimed is:
1. An Event-Based Vision Sensor including stacked dies that are connected vertically.
2. The sensor of claim 1, wherein photo-sensitive devices of each pixel of the pixel array are in the die exposed to illumination and other devices not useful for light capture are in other dies.
3. The sensor of claim 1, further comprising at least one connection for every pixel of the pixel array between the dies.
4. The sensor of claim 1, further comprising photodiodes of each pixel of the pixel array in a first die and respective event detectors of each pixel of the pixel array in a second die and interconnections between the first die and the second die to connect the photodiodes to respective event detectors.
5. The sensor of claim 4, wherein the first die uses a frontside illumination architecture.
6. The sensor of claim 4, wherein the first die uses a backside illumination architecture.
7. The sensor of claim 4, wherein a photoreceptor circuit of each pixel of the pixel array is located on the second die.
8. The sensor of claim 4, wherein a photoreceptor circuit of each pixel of the pixel array is located on the first die.
9. The sensor of claim 4, wherein a photoreceptor circuit of each pixel of the pixel array is distributed between the first die and the second die.
10. The sensor of claim 4, further comprising an additional amplification stage in the first die.
11. The sensor of claim 4, further comprising a photo-sensitive device and multiple n-FET transistors in the first die, and both n-FET and p-FET transistors in the second die.
12. The sensor of claim 1, wherein the transistors on the first die and the transistors on the second die have different properties, including different gate oxide thickness or different implants.
13. A method for fabricating an Event-Based Vision Sensor comprising:
fabricating different devices of each pixel of the pixel array in different wafers or dies; and
stacking the wafers or dies.
14. The method of claim 13, further comprising connecting each of the pixels using Cu-Cu connections.
15. The method of claim 13, further comprising fabricating photodiodes of each pixel of the pixel array in a first wafer or die and fabricating respective event detectors of each pixel of the pixel array in a second wafer or die.
16. The method of claim 15, wherein the first wafer or die uses a frontside illumination architecture.
17. The method of claim 15, wherein the first wafer or die uses a backside illumination architecture.
18. The method of claim 15, wherein a photoreceptor circuit of each pixel of the pixel array is located on the second wafer or die.
19. The method of claim 15, wherein a photoreceptor circuit of each pixel of the pixel array is located on the first wafer or die.
20. The method of claim 15, wherein a photoreceptor circuit of each pixel of the pixel array is distributed between the first wafer or die and the second wafer or die.
EP19716563.2A 2018-03-14 2019-03-08 Event-based vision sensor manufactured with 3d-ic technology Pending EP3766099A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862642838P 2018-03-14 2018-03-14
PCT/IB2019/051920 WO2019175733A1 (en) 2018-03-14 2019-03-08 Event-based vision sensor manufactured with 3d-ic technology

Publications (1)

Publication Number Publication Date
EP3766099A1 true EP3766099A1 (en) 2021-01-20

Family

ID=66102157

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19716563.2A Pending EP3766099A1 (en) 2018-03-14 2019-03-08 Event-based vision sensor manufactured with 3d-ic technology

Country Status (6)

Country Link
US (2) US10923520B2 (en)
EP (1) EP3766099A1 (en)
JP (2) JP2021516872A (en)
KR (2) KR20230170980A (en)
CN (1) CN112243536A (en)
WO (1) WO2019175733A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230224605A1 (en) * 2020-06-19 2023-07-13 Sony Semiconductor Solutions Corporation Imaging device
CN116324959A (en) * 2020-09-28 2023-06-23 索尼半导体解决方案公司 Electronic apparatus and method of controlling the same
JP2022108423A (en) * 2021-01-13 2022-07-26 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging element and imaging apparatus
US20240205567A1 (en) * 2022-12-15 2024-06-20 Prophesee Event sensor pixel with sensitivity and dynamic range optimization

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2017009944A1 (en) * 2015-07-14 2017-01-19 オリンパス株式会社 Solid-state image pickup device
EP3576404A1 (en) * 2017-10-30 2019-12-04 Sony Semiconductor Solutions Corporation Solid-state image pickup element
EP3582491A1 (en) * 2017-10-30 2019-12-18 Sony Semiconductor Solutions Corporation Solid-state image pickup element

Family Cites Families (20)

Publication number Priority date Publication date Assignee Title
US6084229A (en) 1998-03-16 2000-07-04 Photon Vision Systems, Llc Complimentary metal oxide semiconductor imaging device
KR100610481B1 (en) 2004-12-30 2006-08-08 매그나칩 반도체 유한회사 Image sensor with enlarged photo detecting area and method for fabrication thereof
CN101204079B (en) 2005-06-03 2011-07-27 苏黎世大学 Photoarray for detecting time-dependent image data
EP3514831B1 (en) * 2009-12-26 2021-10-13 Canon Kabushiki Kaisha Solid-state image pickup apparatus and image pickup system
TWI513301B (en) * 2010-06-02 2015-12-11 Sony Corp Semiconductor device, solid-state imaging device, and camera system
US8637800B2 (en) * 2011-04-19 2014-01-28 Altasens, Inc. Image sensor with hybrid heterostructure
JP5959186B2 (en) * 2011-05-25 2016-08-02 オリンパス株式会社 Solid-state imaging device, imaging device, and signal readout method
JP5791571B2 (en) * 2011-08-02 2015-10-07 キヤノン株式会社 Imaging device and imaging apparatus
KR101887988B1 (en) * 2012-07-03 2018-08-14 삼성전자 주식회사 Image sensor chip, operation method thereof, and system having the same
JP6374869B2 (en) 2012-10-05 2018-08-15 ラムバス・インコーポレーテッド Multi-bit readout image sensor with conditional reset
KR20140056986A (en) * 2012-11-02 2014-05-12 삼성전자주식회사 Motion sensor array device, depth sensing system and method using the same
KR102136055B1 (en) 2014-01-08 2020-07-21 삼성전자 주식회사 Vision sensor chip having open-loop amplifier, method thereof, and data processing system including the same
US9564464B2 (en) * 2015-06-03 2017-02-07 Semiconductor Components Industries, Llc Monolithically stacked image sensors
WO2017013806A1 (en) * 2015-07-23 2017-01-26 オリンパス株式会社 Solid-state imaging device
JP6655922B2 (en) * 2015-09-15 2020-03-04 サムスン エレクトロニクス カンパニー リミテッド Solid-state imaging device
CN116995084A (en) * 2016-03-31 2023-11-03 株式会社尼康 Image pickup element and image pickup device
US20170337469A1 (en) * 2016-05-17 2017-11-23 Agt International Gmbh Anomaly detection using spiking neural networks
JP2017209045A (en) 2016-05-25 2017-11-30 日油株式会社 milk beverage
JP6651079B2 (en) 2016-05-25 2020-02-19 株式会社ササキコーポレーション Bulb loading section of bulb planting machine
KR102541757B1 (en) 2016-12-30 2023-06-13 소니 어드밴스드 비주얼 센싱 아게 Dynamic vision sensor architecture


Non-Patent Citations (1)

Title
See also references of WO2019175733A1 *

Also Published As

Publication number Publication date
US10923520B2 (en) 2021-02-16
WO2019175733A1 (en) 2019-09-19
JP2021516872A (en) 2021-07-08
JP2023164990A (en) 2023-11-14
US11652126B2 (en) 2023-05-16
US20190288024A1 (en) 2019-09-19
KR20230170980A (en) 2023-12-19
KR20210028139A (en) 2021-03-11
US20210151492A1 (en) 2021-05-20
CN112243536A (en) 2021-01-19

Similar Documents

Publication Publication Date Title
US11652126B2 (en) Event-based vision sensor manufactured with 3D-IC technology
US10957724B2 (en) Single-photon avalanche diode image sensor with photon counting and time-of-flight detection capabilities
KR102430496B1 (en) Image sensing apparatus and manufacturing method thereof
US6815743B2 (en) CMOS imager and method of formation
US10070079B2 (en) High dynamic range global shutter image sensors having high shutter efficiency
US8089543B2 (en) Solid-state image pickup element and solid-state image pickup device
US9728575B1 (en) Pixel and circuit design for image sensors with hole-based photodiodes
US20220350041A1 (en) Silicon photomultipliers with split microcells
US11094734B2 (en) Imaging device
JP2024015381A (en) Imaging device
CN110970453B (en) Image pickup apparatus
WO2023008026A1 (en) Backside illuminated single photon avalanche diode
Kondo et al. A 3D stacked 16Mpixel global-shutter CMOS image sensor using 4 million interconnections
EP3605044B1 (en) Detector, methods for operating a detector and detector pixel circuit
US20230244449A1 (en) Random number generator
US11139335B2 (en) Assembly for detecting electromagnetic radiation and method of producing an assembly for detecting electromagnetic radiation
WO2022118617A1 (en) Imaging device

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200904

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220722