The Photoelectric Effect in Silicon Image Sensors: A Defense [Part 1]
I was once told that the photoelectric effect, whereby a photon is absorbed by a material and liberates an electron from the material's valence band into the conduction band, was unimportant, indeed "irrelevant" and incorrect, for silicon and other semiconductor image sensor technologies.
TOSH...
In the forthcoming set of blog posts I will point to academic engineering literature, solid-state physics textbooks, material from significant international high-profile companies, textbooks authored in part by University of Edinburgh academics, and indeed University of Edinburgh undergraduate notes, to help put forward the correct interpretation.
Now, for my part, I have often had dynamic and heated debates on various topics in science; however, I had never before been told that I was 'digging yourself a hole' when there was clearly a miscommunication and, indeed, a failure to interpret the sources. I was left with a rather angry sense of being unable to defend my views despite having many primary sources to back them up, and an overwhelming sense that the 'expert' in question was thinking of vacuum tube technologies rather than modern solid-state image sensors. At the time it was difficult, as this 'expert' agreed that the photon liberates an electron from the valence band to the conduction band, and therefore accepted the physical mechanism we were discussing, but not its name. Indeed, no other name, one that could be looked up to rectify one's knowledge base, was forthcoming.
So naturally the question arises:
- " If they agree on the mechanism, but not on the name of this process, then what process and name of that process is responsible for the optical to electrical signal conversion, at the quantum level, occurring in any solid-state image sensor, photo diode, or optical communications receiver? "
To start things off, let's look at the physics of how photons interact with matter:
Source: "Subatomic Physics", 3rd edition, E. Henley and A. Garcia, World Scientific, 2007, ISBN-10: 981-270-056-0, pages 45
- " Photons interact with matter chiefly by three processes:
1) Photoelectric Effect
2) Compton Scattering
3) Pair Production "
- Notice that they list only three processes. Therefore, if we take this 'expert's' opinion that optical-to-electrical conversion in solid-state image sensors is *not* the photoelectric effect, then he must be saying it is due to one of the other two processes, namely Compton scattering or pair production. So, since he has excluded the photoelectric effect, let's look at what those processes are:
- 1) Photoelectric Effect: " In the photoelectric effect, the photon is absorbed by an atom, and an electron from one of the shells is ejected. At low energies, below tens of keV, the photoelectric effect dominates, the Compton effect is small, and pair production is energetically impossible. Two of the three processes, photoelectric effect and pair production eliminate the photons undergoing interaction. "
- 2) Compton Scattering: " In the Compton effect, the photon scatters from an atomic electron. In Compton scattering, the scattered photon is degraded in energy. "
- 3) Pair Production: " In pair production, the photon is converted into an electron-positron pair. At an energy of (2)(m)(c^2), pair production becomes possible and it soon dominates completely. Two of the three processes, photoelectric effect and pair production eliminate the photons undergoing interaction. "
- So, I would say we can eliminate pair production very quickly: a) the energy required (at least 2mc^2, roughly 1.022 MeV) is firmly at gamma-ray levels, and b) do CMOS image sensors work by a process that generates positrons? I hardly think so...
- We can also discard Compton scattering, as a) we would observe a secondary, scattered photon re-emerging from the silicon, b) that secondary photon would have a longer wavelength, its energy degraded by exactly the amount handed to the atomic electron, and c) at visible wavelengths the energy handed to that electron is tiny (the Compton wavelength shift is at most a few picometres), far too small to liberate it or to account for the photocurrents we actually measure; see the quick calculation below.
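- To put a number on point c), here is a minimal Python sketch (my own, not from Henley and Garcia) of the worst-case Compton energy transfer for a visible photon; the constants are standard CODATA values.

```python
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
M_E = 9.1093837015e-31  # electron rest mass, kg
EV = 1.602176634e-19    # joules per electronvolt

compton_wavelength_m = H / (M_E * C)       # ~2.43 pm
max_shift_m = 2 * compton_wavelength_m     # worst case: 180-degree back-scatter

wavelength_m = 450e-9                                  # a 450 nm visible photon
e_in_ev = H * C / wavelength_m / EV                    # incident photon energy
e_out_ev = H * C / (wavelength_m + max_shift_m) / EV   # worst-case scattered photon energy

print(f"Maximum Compton shift       : {max_shift_m * 1e12:.2f} pm")
print(f"450 nm photon energy        : {e_in_ev:.3f} eV")
print(f"Max energy given to electron: {(e_in_ev - e_out_ev) * 1e6:.0f} micro-eV")
# Tens of micro-eV at most: nowhere near the ~1.12 eV silicon band gap, so Compton
# scattering cannot be what generates the photocarriers in an image sensor.
```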
- This all, therefore, comes down to the energy of the original photon.
- So, for typical image sensors, what wavelengths are we interested in? In practice we are often limited by the absorption coefficient of the material the sensor is made from. Typically, CMOS image sensors and silicon SPADs are used between the near-UV (approx. 300 nm) and the very near-IR (approx. 1000 nm). We can sometimes get sensitivity a little beyond that, but by moving to germanium or indium gallium arsenide (InGaAs) detectors we can reach the 1 um to perhaps 3 um band. Beyond that, detectors require rather exotic materials such as lead-salt compounds (the 5 um band and so on). In the other direction, at wavelengths below approx. 300 nm we start to require absorptive layers and charge-sensitive amplifiers (X-rays), or scintillator crystals that re-emit in a more user-friendly, detectable wavelength band (gamma rays). These rough bands are summarised in the sketch below.
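- For reference, here is a minimal sketch of those approximate bands as a lookup. The figures are the rough ones quoted above; real cut-offs vary with device structure, doping and process, so treat them purely as a guide.

```python
# Approximate detector sensitivity bands, in nm, as discussed above.
DETECTOR_BANDS = [
    ("scintillator + photodetector (gamma rays)", 1e-4, 1e-2),
    ("absorptive layer + charge-sensitive amplifier (X-rays)", 1e-2, 10),
    ("silicon CMOS image sensor / silicon SPAD", 300, 1000),
    ("Ge or InGaAs photodiode", 1000, 3000),
    ("lead-salt detector (e.g. PbS/PbSe)", 3000, 5500),
]

def candidate_detectors(wavelength_nm: float) -> list[str]:
    """Detector families whose rough band covers the given wavelength."""
    return [name for name, lo, hi in DETECTOR_BANDS if lo <= wavelength_nm <= hi]

for wl in (0.124, 450, 750, 1550, 5000):
    names = candidate_detectors(wl) or ["(outside this rough table)"]
    print(f"{wl:>8} nm -> {', '.join(names)}")
```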
- For my purposes, I was always limited to the band from 400 nm to approx. 750 nm, i.e. firmly within the human-visible region of the electromagnetic spectrum. What photon energies does this translate to? (A quick conversion sketch follows the links below.)
- 450 nm → approx. 2.75 eV
- 750 nm → approx. 1.65 eV, most assuredly below the "tens of keV" discussed by Henley and Garcia.
- See: http://halas.rice.edu/conversions
- See: http://what-when-how.com/wp-content/uploads/2011/10/tmp1A49_thumb.jpg
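- Those numbers follow directly from E = hc/λ, i.e. E [eV] ≈ 1239.84 / λ [nm]. A minimal conversion sketch (my own, simply reproducing the figures above):

```python
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a wavelength given in nm."""
    return H_C_EV_NM / wavelength_nm

def photon_wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength in nm for an energy given in eV."""
    return H_C_EV_NM / energy_ev

print(f"450 nm  -> {photon_energy_ev(450):.3f} eV")        # ~2.755 eV
print(f"750 nm  -> {photon_energy_ev(750):.3f} eV")        # ~1.653 eV
print(f"10 keV  -> {photon_wavelength_nm(10e3):.3f} nm")   # ~0.124 nm, X-ray territory
print(f"1.022 MeV (pair production threshold) -> {photon_wavelength_nm(1.022e6) * 1e3:.2f} pm")
```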
- And to what wavelength does that "tens of keV" correspond? A 10 keV photon has a wavelength of roughly 0.124 nm, a very long way indeed from the 450 nm we observe with CMOS image sensors or the human eye. By the way, 0.1 nm is firmly in the X-ray region.
- So, with all this in mind, the question becomes: " For the wavelength range of 450-750 nm, what is the process by which photons are converted to electrons? " Clearly there can be only one answer: for those photon energies, and in the absence of any discussion of positrons or of secondary photons of longer wavelength in the CMOS image sensor literature, the photoelectric effect is the *only* remaining physical process. The little elimination sketch below summarises the argument.
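- To tie the elimination together, here is a minimal sketch based on the thresholds quoted above: 2mc^2 ≈ 1.022 MeV for pair production, and "tens of keV" taken loosely as 10 keV for the point at which Compton scattering stops being negligible (in reality this crossover depends on the material's atomic number).

```python
PAIR_PRODUCTION_THRESHOLD_EV = 1.022e6    # 2 * m_e * c^2
COMPTON_SIGNIFICANT_ABOVE_EV = 10e3       # loosely, the "tens of keV" regime

def plausible_processes(photon_energy_ev: float) -> list[str]:
    """Photon-matter interactions worth considering at this photon energy."""
    processes = ["photoelectric effect"]
    if photon_energy_ev > COMPTON_SIGNIFICANT_ABOVE_EV:
        processes.append("Compton scattering")
    if photon_energy_ev >= PAIR_PRODUCTION_THRESHOLD_EV:
        processes.append("pair production")
    return processes

for e_ev in (1.65, 2.75, 50e3, 2e6):
    print(f"{e_ev:>12,.2f} eV -> {', '.join(plausible_processes(e_ev))}")
# At the 1.65-2.75 eV of visible light, only the photoelectric effect remains.
```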
In later blog posts I'll continue this analysis with other texts, papers, patents, company literature, undergraduate notes and quotations from experts in the field. Clearly the single source discussed above is but the start of a robust rebuttal to this "expert", who suggested that the photoelectric effect was, and I quote, "irrelevant" to any research in silicon or CMOS image or optical sensor design.
My suspicion is that he has confused the internal and external photoelectric effects, a distinction that revolves around this business of an electron being "ejected". Does it mean ejected from the surface of the material (the external photoelectric effect), or does it mean, just as Henley and Garcia state, ejected from the atomic shell (the internal photoelectric effect), and therefore available either to a) move, and thereby constitute a photo-generated current, or b) modify the conductivity of the material? That distinction, and the sources discussing it, I will leave for a future blog post, but a first back-of-the-envelope comparison is sketched below.
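As a teaser, here is a minimal sketch of that distinction, assuming commonly quoted figures of roughly 1.12 eV for the silicon band gap and roughly 4.6 eV at the low end of reported silicon work functions; these values come from standard references, not from the source quoted above, and vary with doping, temperature and surface condition.

```python
SI_BAND_GAP_EV = 1.12        # approx. silicon band gap at 300 K (indicative only)
SI_WORK_FUNCTION_EV = 4.6    # approx. low end of quoted silicon work functions (indicative only)
H_C_EV_NM = 1239.84          # h*c in eV*nm

for wavelength_nm in (450, 750):
    e_photon = H_C_EV_NM / wavelength_nm
    internal = e_photon >= SI_BAND_GAP_EV       # lift an electron: valence band -> conduction band
    external = e_photon >= SI_WORK_FUNCTION_EV  # eject an electron from the material surface
    print(f"{wavelength_nm} nm ({e_photon:.3f} eV): "
          f"internal photoelectric effect possible: {internal}, "
          f"external possible: {external}")
# Visible photons comfortably exceed the band gap but fall well short of the work
# function, so in a silicon image sensor it is the *internal* photoelectric effect at work.
```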