Linear Transfer Functions

A linear transfer function is an operator $T$ that adheres to the superposition principle (i.e. linearity):

\[T\{ax_1+bx_2\}=aT\{x_1\}+bT\{x_2\}\]

For any two object-plane distributions $x_1$ and $x_2$, the transfer of the sum of these distributions equals the sum of the transfers of the distributions individually.
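The principle can be checked numerically. Below is a minimal sketch in plain Julia, using a 1-D discrete convolution as a stand-in for the transfer operator $T$; the names here are illustrative, not package API.

```julia
# A 1-D discrete convolution stands in for the transfer operator T,
# and we verify the superposition principle numerically.
function convolve(x::Vector{Float64}, h::Vector{Float64})
    n, m = length(x), length(h)
    y = zeros(n + m - 1)
    for i in 1:n, j in 1:m
        y[i + j - 1] += x[i] * h[j]
    end
    return y
end

h = [0.25, 0.5, 0.25]              # a simple blur kernel (the "system")
x1 = [1.0, 0.0, 2.0, 0.0]
x2 = [0.0, 3.0, 0.0, 1.0]
a, b = 2.0, -1.5

lhs = convolve(a .* x1 .+ b .* x2, h)                # T{a x1 + b x2}
rhs = a .* convolve(x1, h) .+ b .* convolve(x2, h)   # a T{x1} + b T{x2}
@assert lhs ≈ rhs                                    # superposition holds
```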

Linear transfer functions simplify the analysis because the well-established mathematical tools of linear algebra and linear operator theory can be applied to the system. Although this is true in theory, whole systems are often too large to be used directly in the form of matrices.
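To see the scale involved, consider representing the transfer of a flattened n×n image as an explicit dense matrix. A back-of-the-envelope sketch in plain Julia (the numbers are illustrative):

```julia
# An explicit matrix acting on a flattened n×n image has (n^2)^2 entries;
# even a modest 1024×1024 image would need a dense operator far beyond
# available memory.
n = 1024                       # image side length
pixels = n^2                   # flattened vector length
entries = big(pixels)^2        # dense matrix entries: 2^40 ≈ 1.1e12
bytes = entries * 8            # Float64 storage: 2^43 bytes ≈ 8.8 TB
@show entries bytes
```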

You can obtain a transfer function for your optical setup by supplying the parameters of the apparatus to a model transfer function derived from the underlying physics of a microscope. Otherwise, you can estimate the transfer function, most commonly from an acquisition in which the imaged sample is known, such as sub-diffraction-sized microspheres of known size.

Shift Invariant Linear Transfer Functions

TransferFunctions.LinearShiftInvariantTransferFunctionType
LinearShiftInvariantTransferFunction{N} <: LinearTransferFunction{N}

A supertype for all linear shift-invariant transfer functions.

A linear shift-invariant transfer function ensures that the system has a linear, shift-invariant response. Linearity means the system obeys the superposition principle: the response to a superposition of inputs is the superposition of the corresponding responses. Shift invariance means the response to a signal is invariant to translation, which can be restated as "a single object in the object plane produces the same output in the image plane irrespective of its position within the object plane."

See also TransferFunctions

source
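Shift invariance can likewise be demonstrated numerically. A minimal plain-Julia sketch (not package API), using a circular convolution as the stand-in system: translating the input translates the output by the same amount.

```julia
# Circular convolution is shift-invariant: shifting the input by s
# shifts the output by s.
function circconv(x::Vector{Float64}, h::Vector{Float64})
    n = length(x)
    y = zeros(n)
    for i in 1:n, j in 1:length(h)
        y[mod1(i + j - 1, n)] += x[i] * h[j]
    end
    return y
end

h = [0.25, 0.5, 0.25]
x = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]       # a point source in 1-D
s = 2                                     # shift amount

# shifting then convolving == convolving then shifting
@assert circconv(circshift(x, s), h) ≈ circshift(circconv(x, h), s)
```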
TransferFunctions.OpticalTransferFunctionType
OpticalTransferFunction{N} <: LinearShiftInvariantTransferFunction{N}

Implementation

To create a new optical transfer function (OTF) A <: OpticalTransferFunction, you must define the attenuation at a given frequency coordinate: attenuation(otf::A, kx::Frequency, ky::Frequency).

source
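As a hedged illustration of this contract, the sketch below mirrors the interface with a standalone abstract type and plain Float64 frequencies standing in for the package's Frequency type. GaussianOTF and its field are invented for illustration and are not part of the package.

```julia
# Stand-in for the package supertype, to keep the sketch self-contained.
abstract type OpticalTransferFunction end

# A hypothetical OTF with a Gaussian frequency response.
struct GaussianOTF <: OpticalTransferFunction
    σ::Float64   # spectral width
end

# The one method a new OTF subtype must provide: the attenuation at a
# spatial-frequency coordinate (kx, ky).
attenuation(otf::GaussianOTF, kx, ky) = exp(-(kx^2 + ky^2) / (2 * otf.σ^2))

otf = GaussianOTF(0.5)
@assert attenuation(otf, 0.0, 0.0) == 1.0   # no attenuation at DC
@assert attenuation(otf, 1.0, 1.0) < attenuation(otf, 0.5, 0.0)  # decays with |k|
```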
TransferFunctions.convMethod
conv(img::SpatialMatrix, ltf::LinearShiftInvariantTransferFunction)

Transfer the image img using the linear transfer function ltf, i.e. convolve the image with the equivalent PSF.

source
TransferFunctions.deconvMethod
deconv(img::SpatialArray, ltf::LinearShiftInvariantTransferFunction)

Deconvolve the image img that was transferred using the linear transfer function ltf using the specified algorithm.

source
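To illustrate what deconvolution undoes, here is a self-contained toy in plain Julia that expresses a circular blur as a small matrix and inverts it exactly. A practical deconvolution (such as the package's deconv) must additionally cope with noise and ill-conditioning, which this noiseless sketch ignores.

```julia
using LinearAlgebra

# Build the circulant matrix whose columns are circular shifts of h,
# so that C * x computes the circular convolution of x with h.
function circulant(h::Vector{Float64}, n::Int)
    C = zeros(n, n)
    for i in 1:n, j in 1:length(h)
        C[mod1(i + j - 1, n), i] = h[j]
    end
    return C
end

h = [0.25, 0.5, 0.25]
x = [0.0, 1.0, 4.0, 2.0, 1.0]        # "ground truth" signal
C = circulant(h, length(x))
y = C * x                             # forward transfer (blur)
x̂ = C \ y                            # naive deconvolution by matrix solve
@assert x̂ ≈ x                        # the noiseless signal is recovered
```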

Point Spread Functions

A point spread function is a linear shift-invariant transfer function that defines the response of the system to a single point light source in the focal plane (or in object space, if imaging in 3D).

TransferFunctions.PointSpreadFunctionType
PointSpreadFunction{N} <: LinearShiftInvariantTransferFunction{N}

A point spread function is a description of a transfer function, specifying the intensity transfer of a single point source in the object plane into a region of the image plane.

Implementation

To create a new point spread function (PSF) A <: PointSpreadFunction, you must define the intensity at a given length coordinate: intensity(psf::A, x::Length, y::Length).

source
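As with the OTF contract above, this interface can be sketched in a self-contained way, with plain Float64 coordinates standing in for the package's Length type. GaussianPSF is invented for illustration and is not package API.

```julia
# Stand-in for the package supertype, to keep the sketch self-contained.
abstract type PointSpreadFunction end

# A hypothetical PSF with a normalized 2-D Gaussian profile.
struct GaussianPSF <: PointSpreadFunction
    σ::Float64   # spatial spread
end

# The one method a new PSF subtype must provide: the intensity at a
# position (x, y) relative to the point source.
intensity(psf::GaussianPSF, x, y) =
    exp(-(x^2 + y^2) / (2 * psf.σ^2)) / (2π * psf.σ^2)

psf = GaussianPSF(1.0)
@assert intensity(psf, 0.0, 0.0) ≈ 1 / (2π)                # peak at the center
@assert intensity(psf, 2.0, 0.0) < intensity(psf, 1.0, 0.0)  # decays outward
```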

The response method gives the density of the point spread function at a given location relative to its center.

TransferFunctions.convMethod
conv(img::SpatialArray{<:Real,2}, tf::PointSpreadFunction, [border=:reflect]; <kwargs>)

Convolve the image img with the PSF tf. Additional arguments are passed to imfilter.

source

Estimation

Estimation methods for transfer functions are implemented in the Estimation module of the TransferFunctions package.

Non-Blind Methods

Non-blind estimation methods require a pair, or a set of pairs, of ground-truth images along with their corresponding acquired raw images (noisy versions of the ground-truth images blurred by the transfer function).

Ground Truth

The ground truth must often itself be estimated using a model of the acquired scene. One option for such a pair is an acquisition of sub-diffraction microspheres or fluorescent polymer beads, together with a model built from the beads' known dimensions and estimated positions in the scene.
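On a 1-D toy, the non-blind setting can be sketched in plain Julia: when the ground truth is known, the unknown kernel enters the forward model linearly, so it can be recovered by least squares. All names here are illustrative, not package API, and real data would add noise that this sketch omits.

```julia
using LinearAlgebra

x = [0.0, 1.0, 4.0, 2.0, 1.0, 0.0, 3.0]   # known ground-truth signal
h_true = [0.25, 0.5, 0.25]                 # "unknown" kernel to recover

n, m = length(x), length(h_true)
y = zeros(n)                               # simulated noiseless acquisition
for i in 1:n, j in 1:m
    y[mod1(i + j - 1, n)] += x[i] * h_true[j]
end

# Because convolution is linear in the kernel, y = A * h, where column j
# of A holds the ground truth circularly shifted by j - 1.
A = zeros(n, m)
for i in 1:n, j in 1:m
    A[mod1(i + j - 1, n), j] = x[i]
end

ĥ = A \ y                                  # least-squares kernel estimate
@assert ĥ ≈ h_true                         # exact in the noiseless case
```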