Shallow introduction for deep learning: retinal image analysis

Petteri Teikari, PhD — http://petteri-teikari.com/ — version Thu 27 October 2016

Table of Contents: Imaging Techniques & Eye · Image Quality · AI-enhanced Retinal Imaging · Data Engineering · Retinal Data Sources · Labels and Needed Data Quantity · Active Learning · Data Pre-processing · Data Augmentation · AI Frameworks · CNN Architectures · CNN Components · CNN: Domain-specific Issues · Sparse ConvNets · Compressed Sensing · Feature Extraction & Understanding · Transfer Learning · Network Optimization & Hardware

Uploaded by petteri-teikari-phd, 16-Apr-2017


  • Introduction: The purpose is to introduce 'something about everything' to ease communication between people from different disciplines.

    Namely, to help biologists, clinicians, engineers, data scientists, statisticians and physicists understand each other in this practically multidisciplinary problem, rather than keeping them all in their own silos.

    Strive for a 'systems engineering' solution in which the whole pipeline is intelligent, rather than just the individual components.

    The presentation itself is quite dense, and better suited to being read on a tablet/desktop than projected as a slideshow.

    technologyreview.com, August 23, 2016, by Olga Russakovsky

    We discuss recent progress and future directions for imaging in behaving mammals from a systems engineering perspective, which seeks holistic consideration of fluorescent indicators, optical instrumentation, and computational analyses. http://dx.doi.org/10.1016/j.neuron.2015.03.055

  • IMAGING TECHNIQUES

    https://www.technologyreview.com/s/602157/ais-research-rut/ | http://dx.doi.org/10.1016/j.neuron.2015.03.055

  • Eye & Retina

  • Image-forming characteristics of eye

    Artal (2015) http://dx.doi.org/10.1146/annurev-vision-082114-035905

    http://www.rhsmpsychology.com/Handouts/retina.htm | http://www.bio.miami.edu/tom/courses/bil265/bil265goods/11_vision.html | http://antranik.org/the-eye-and-vision/

  • Image-forming characteristics of eye: Purkinje images. Pablo Artal: A light source illuminating the eye generates specular reflections at the different ocular interfaces (air–cornea, cornea–aqueous, aqueous–crystalline lens and lens–vitreous) that are commonly named Purkinje images (PI, PII, PIII and PIV), after the Czech physiologist Jan Purkinje, who made use of them in the 19th century. In the early times of Physiological Optics these reflections were the primary source used to obtain information about the ocular structures.

    http://typecast.qwriting.qc.cuny.edu/2012/05/21/purkinje-images/ pabloartal.blogspot.co.uk/2009/02

    http://dx.doi.org/10.1364/OE.14.010945

    Cornsweet, T. N., and H. D. Crane. "Accurate two-dimensional eye tracker using first and fourth Purkinje images." Journal of the Optical Society of America 63, no. 8 (1973): 921–928. http://dx.doi.org/10.1364/JOSA.63.000921

    http://dx.doi.org/10.1007/978-94-011-5698-1_36

    http://dx.doi.org/10.1177/0748730409360888

    Multispectral Fundus camera

    Lens Absorption Monitor (LAM) based on Purkinje images

    Note that the PIV image is an inverted version of all the other reflections and can be identified relatively easily and automatically using computer vision techniques.

    The difference between the PIII and PIV images can be used to quantify misalignment of the ocular surfaces, which is useful for example after implantation of intraocular lenses (IOL) in cataract surgery, and to measure the crystalline lens absorbance of the in vivo human eye (e.g. already done by Said and Weale in 1959, Gerontologia 1959;3:213–231, doi:10.1159/000210900). In practice, the higher the dynamic range of the camera, the better.
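    Because PIV is the only inverted reflection, a crude automatic identification is possible by comparing each candidate reflection against a 180°-rotated reference patch. A minimal numpy sketch of this idea (the patch cropping step, function names and thresholds are illustrative assumptions, not from the slides):

```python
import numpy as np

def find_inverted_reflection(patches):
    """Given equal-sized image patches of candidate Purkinje reflections,
    return the index of the patch most similar to the 180-degree rotation
    of the reference (first) patch - a simple proxy for spotting the
    inverted PIV image - together with the correlation scores."""
    ref = patches[0]
    flipped = ref[::-1, ::-1]  # 180-degree rotation of the reference

    def ncc(a, b):
        # Normalized cross-correlation of two zero-meaned patches.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    scores = [ncc(flipped, p) for p in patches[1:]]
    return 1 + int(np.argmax(scores)), scores
```

    In a real pipeline the candidate patches would first be detected as bright specular spots and cropped from a high-dynamic-range frame; here they are simply passed in as arrays.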

    http://dx.doi.org/10.1146/annurev-vision-082114-035905

  • Spectral characteristics of the eye

    The eye is composed of several layers, each different in structure, absorption and scattering properties. faculty.cua.edu

    Teikari thesis (2012)

    Enezi et al. (2011); Stockman and Sharpe (2000), CVRL; Govardovskii et al. (2000)

    van de Kraats and van Norren (2007); Walraven (2003), CIE Report

    Styles et al. (2005)

    The Annidis RHA system combines advanced multispectral imaging (MSI) technology with multi-image software processing for early detection of ocular pathologies such as age related macular degeneration, diabetic retinopathy and glaucoma. http://www.annidis.com/page/technology


  • Birefringent properties of eye i.e. polarization

    Eigenvectors associated with the birefringent structure of the eye in double-pass (central cornea–fovea), for three different subjects: (a) 2 mm; (b) 5 mm pupil diameter. CR and CL, circular polarization states; LH, L+45 and L−45, linear polarization states.

    http://dx.doi.org/10.1016/S0042-6989(00)00220-0

    http://dx.doi.org/10.1038/sj.eye.6702203

    http://dx.doi.org/10.1001/archopht.121.7.961

    http://dx.doi.org/10.1364/JOSAA.4.000082

    http://dx.doi.org/10.1167/iovs.03-1160http://dx.doi.org/10.1364/AO.21.003811

    http://dx.doi.org/10.1016/j.preteyeres.2011.06.003

    Example of tissue discrimination based on PS-OCT. (A) intensity image, (B) pseudo-color coded structural images. The light brown corresponds to conjunctiva, green indicates sclera, dark yellow indicates trabecular meshwork, blue indicates cornea, and red indicates uvea (reprinted from Miyazawa et al. (2009)).

    Different performances of RPE detection in a patient with neovascular AMD. (A) Cirrus OCT (Zeiss Meditec), (B) Spectralis OCT (Heidelberg Engineering), (C) PS-OCT. (D) Retinal thickness map (inner limiting membrane to RPE) retrieved from Cirrus OCT, (E) retinal thickness map retrieved from Spectralis OCT, (F) retinal thickness map obtained with PS-OCT (areas with RPE atrophy are marked in gray). The arrows in (C) and (F) point to locations with RPE atrophy (reprinted from Ahlers et al. (2010)).

    ...differentiation between different highly backscattering layers is difficult because of the heavily distorted retinal structure. Therefore, the automated RPE segmentation provided by commercial instruments often yields erroneous results (cf. bottom red B). Moreover, the commercially available RPE detection algorithms fail to detect RPE atrophies. Since RPE segmentation using PS-OCT data is based on an intrinsic tissue-specific contrast, RPE atrophies can be detected even in such a case of RPE distortion. Note the presence of multiple locations of RPE atrophy that can only be detected with PS-OCT (cf. arrows in F). These atrophies might explain why, in these patients, after restoration of the retinal anatomy visible in OCT B-scans (e.g. after antiangiogenic treatment), visual acuity is not improved (Ahlers et al., 2010).

    http://faculty.cua.edu/ramella/retina_research.php.html | http://petteri-teikari.com/pdf/teikariPetteriThesis_v12-12-24_150dpi.pdf | http://dx.doi.org/10.1177/0748730411409719 | http://www.cvrl.org/cones.htm | http://europepmc.org/abstract/med/11016572 | http://dx.doi.org/10.1364/JOSAA.24.001842 | http://dx.doi.org/10.1117/12.595292 | http://www.annidis.com/page/technology

  • Nonlinear Optical Susceptibility of the Eye #1

    Multimodal nonlinear imaging of intact excised human corneas. Adapted from IOVS 2010 and Opt Express 2010. portail.polytechnique.edu

    Nonlinear microscopies have the unique ability to provide micrometer-scale 3D images from within complex, scattering samples like biological tissues.

    In particular, third-harmonic generation (THG) microscopy detects interfaces and optical heterogeneities and provides 3D structural images of unstained biological samples. This information can be combined with other nonlinear signals such as two-photon microscopy (2-PM) and second harmonic generation (SHG).

    Since THG is a coherent process, signal generation must be properly analyzed in order to interpret the images. We study the contrast mechanisms in THG microscopy (phase matching, nonlinear susceptibilities), and we develop novel applications such as: imaging morphogenesis in small animal models (zebrafish, drosophila); imaging lipids in cells and tissues; polarization-resolved THG analysis of organized media (human cornea, skin lipids, etc.).

    Jablonski diagrams showing linear vs. non-linear fluorescence. In linear single-photon excitation, the absorption of short wavelength photons results in a longer wavelength fluorescence emission. In non-linear two-photon excitation (2PE), the absorption of two long wavelength photons results in a shorter wavelength fluorescence emission. The techniques of second and third harmonic generation microscopy (SHG and THG, respectively) elicit a non-linear optical (NLO) response in molecules that lack a center of symmetry. When multiple longwave photons are simultaneously absorbed by these molecules, photons that are 1/2 or 1/3 of the original wavelength are emitted. alluxa.com/learning-center
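    The wavelength relationship above is simple arithmetic: an n-th harmonic process emits at 1/n of the excitation wavelength. A tiny sketch (function name is illustrative):

```python
def harmonic_wavelength(excitation_nm: float, order: int) -> float:
    """Emission wavelength (nm) of an n-th harmonic generation process:
    SHG (order=2) emits at half, THG (order=3) at a third of the
    excitation wavelength."""
    return excitation_nm / order

# e.g. a 1200 nm femtosecond source:
shg = harmonic_wavelength(1200, 2)  # 600.0 nm (SHG)
thg = harmonic_wavelength(1200, 3)  # 400.0 nm (THG)
```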

    http://dx.doi.org/10.1364/AOP.3.000205


  • Nonlinear Optical Susceptibility of the Eye #2

    https://dx.doi.org/10.1167%2Fiovs.15-16783

    http://dx.doi.org/10.1117/1.3183805

    http://dx.doi.org/10.1002/lpor.200910024

    http://www.molvis.org/molvis/v21/538

    Understanding the mechanical behavior of the Optic Nerve Head (ONH) is important for understanding the pathophysiology of glaucoma. We have developed an inflation test that uses second harmonic generation (SHG) imaging and digital volume correlation (DVC) to measure the deformation response of the lamina cribrosa, the connective tissue structure of the ONH, to controlled pressurization. Human eyes were obtained from a tissue bank.

    http://dx.doi.org/10.1007/978-3-319-21455-9_2

    We are also using two-photon and second harmonic generation with confocal imaging to investigate the extracellular attachments between the trabecular meshwork and Schlemm's canal. These recent studies show a decrease in elastin near the base of Schlemm's canal in glaucoma eyes, which may affect the mechano-sensitive environment and disrupt outflow. In conclusion, we are utilizing multiple imaging modalities to answer questions regarding fluid flow patterns, local and global relationships within the eye, and morphological changes that occur in glaucoma.

    journals.cambridge.org

    http://dx.doi.org/10.1098/rsif.2015.0066

    (a) A typical PFO/DOFA map of a human ONH. (b) A typical SHG image of the same ONH. PFO/DOFA maps were overlaid and aligned with SHG images to allow identification of the scleral canal margin in PFO/DOFA maps. (c) The ONH was subdivided into: the LC, terminating at the edge of the scleral canal; an insertion region, defined as an annular ring extending 150 µm from the scleral canal margin; and a peripapillary scleral region, defined as an annular ring extending from 150 to 1000 µm from the scleral canal margin. (d) The LC was subdivided into 12 regions for analysis. S, superior; N, nasal; I, inferior; T, temporal.

    http://dx.doi.org/10.1097/ICO.0000000000000015 | http://dx.doi.org/10.1117/12.2077569

    https://portail.polytechnique.edu/lob/en/third-harmonic-generation-thg-microscopy | http://www.alluxa.com/learning-center/item/147-thin-film-optical-components-for-use-in-non-linear-optical-systems

  • PATHOLOGIES

    Diabetic retinopathy: http://www.coatswortheyeclinic.co.uk/photography/3102576

    http://www.maskelloptometrists.com/glaucoma/

    Macular degeneration: http://sutphineyecare.com/Macular_Degeneration.html

    Retinal Diseases Signs In One Picture: http://www.ophthnotes.com/retinal-diseases-signs-in-one-picture/

    + Bone spicule pigments (BSP) in Retinitis pigmentosa (RP), Chorioretinal Atrophy, Congenital hypertrophy of the retinal pigment epithelium (CHRPE), Asteroid hyalosis, Haemangioma, Choroidal neovascularization (CNV), Retinoschisis, etc.

    http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=10426322&fileId=S1431927616006814

  • Imaging Techniques

    www.optometry.iu.edu


  • IMAGING TECHNIQUES: 2D fundus photography, old-school imaging

    http://dx.doi.org/10.5772/58314

    The image is like any other digital image, and is degraded by a combination of fixed-pattern, random, and banding noise.

    http://www.cambridgeincolour.com/tutorials/image-noise.htm

    Illumination techniques: diffuse illumination; direct illumination, optic section technique; direct illumination, parallelepiped technique; (A) fundus retro-illumination; indirect illumination, scleral scatter technique; indirect illumination; conical beam illumination; (B) iris retro-illumination.

    http://dx.doi.org/10.1038/eye.2011.1https://www.urmc.rochester.edu/eye-institute/research/retina-research.aspxhttp://www.optometry.iu.edu/faculty-research/directory/miller-donald-t/index.shtml

  • Fundus Components

    Almazroa et al. (2015). doi:10.1155/2015/180972

    Hoover and Goldbaum (2003), + STARE doi:10.1109/TMI.2003.815900

    Abdullah et al. (2016), https://doi.org/10.7717/peerj.2003 (Disc + Macula); Girard et al. (2016)


  • Additional features

    Annunziata et al. (2015)

    In certain pathologies, the treatment itself may leave features that make automatic analysis of disease progression more difficult. http://www.coatswortheyeclinic.co.uk/photography/3102576

    Vascular enhancement with fluorescein dye (angiography, emedicine.medscape.com/article/1223882-workup)

    http://cecas.clemson.edu/~ahoover/stare/nerve/index.html | http://dx.doi.org/10.1117/12.2216397 | http://homes.esat.kuleuven.be/~mblaschk/projects/retina/qualitativeResults.png

  • OCT Optical coherence tomography

    http://dx.doi.org/10.5772/58314

    Three methods that use low coherence interferometry to acquire high resolution depth information from the retina. (A) Time domain OCT. (B) Spectral or Fourier domain OCT. (C) Swept source OCT. Williams (2011)

    http://www.slideshare.net/DrPRATIK189/oct-62435607 by Pratik Gandhi

    http://dx.doi.org/10.1109/JBHI.2015.2440091 | http://emedicine.medscape.com/ | http://www.cs.rug.nl/~imaging/databases/retina_database/retinalfeatures_database.html

  • OCT Scan terminology

    http://www.slideshare.net/sealdioftal/oct-presentation

    http://dx.doi.org/10.1016/j.visres.2011.05.002 | http://www.slideshare.net/aryalmanu/optical-coherence-tomography-37170447

  • OCT Scan procedures

    OCT scan settings used for simulation of the repeatability of different thickness estimates. Oberwahrenbrock et al. (2015)

    https://www.youtube.com/watch?v=_4U3QTrDupE

    https://www.youtube.com/watch?v=KKqy8mSFSC0

    http://www.slideshare.net/sealdioftal/oct-presentation

  • OCT Image model: OCT images are corrupted by multiplicative speckle noise. "The vast majority of surfaces, synthetic or natural, are extremely rough on the scale of the wavelength. Images obtained from these surfaces by coherent imaging systems such as laser, SAR, and ultrasound suffer from a common phenomenon called speckle." wikipedia.org
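    Because speckle is multiplicative (observed intensity = reflectivity × noise), many compensation methods operate in log-transformed space, where the noise becomes additive and conventional denoisers apply. A toy numpy sketch of this standard trick (the gamma "L-look" intensity noise model and all names are illustrative assumptions, not a specific method from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_speckle(reflectivity, looks=4):
    """Simulate multiplicative speckle: each pixel is scaled by a
    gamma-distributed noise term with unit mean (L-look intensity model)."""
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=reflectivity.shape)
    return reflectivity * noise

def log_domain(image, eps=1e-8):
    """Log transform: turns multiplicative speckle into additive noise,
    so ordinary (e.g. Gaussian or wavelet) denoisers can be applied."""
    return np.log(image + eps)

clean = np.full((64, 64), 10.0)          # flat 'tissue' reflectivity
noisy = add_speckle(clean)
# In the log domain the corruption is additive and signal-independent:
residual = log_domain(noisy) - log_domain(clean)
```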

    Results of applying different speckle compensation methods on the human retina imagery. Cameron et al. (2013)

    Comparison of method by Bian et al. (2013) with the other four popular methods. Input: 8 frames of the pig eye data. (a) is the original image in log transformed space, while (b) is the averaged image of 455 registered frames. (c) is the averaged image of the input 8 frames, and (d)-(g) are the recovered results of four popular methods. The result of our method is shown in (h). The two clipped patches on the right of each subfigure are closeups of the regions of interest.

    Fourier-domain optical coherence tomography (FD-OCT) image of optical nerve head, before (A) and after (B) curvelet coefficients shrinkage-based speckle noise reduction Jian et al. (2009)

    http://dx.doi.org/10.1371/journal.pone.0137316

  • OCT Components

    Kraus et al. (2014)

    (a, b) Final segmentation on the original image. (c) Definition of eleven retinal surfaces (surfaces 1–11). ILM = internal limiting membrane, NFL = nerve fiber layer, GCL = ganglion cell layer, IPL = inner plexiform layer, INL = inner nuclear layer, OPL = outer plexiform layer, ONL = outer nuclear layer, ISP-TI = inner segment of photoreceptors, transition to outer part of inner segment, ISP-TO = inner segment of photoreceptors, start of transition to outer segment, RPE = retinal pigment epithelium

    Kafieh et al. (2013)

    Automated segmentation of 7 retinal layers. NFL: Nerve Fiber Layer, GCL + IPL: Ganglion Cell Layer + Inner Plexiform Layer, INL: Inner Nuclear Layer, OPL: Outer Plexiform Layer, ONL: Outer Nuclear Layer, OS: Outer Segments, RPE: Retinal Pigment Epithelium.

    Hendargo et al. (2013)

    https://en.wikipedia.org/wiki/Speckle_noise | http://dx.doi.org/10.1364/BOE.4.001769 | http://arxiv.org/abs/1312.1931 | http://dx.doi.org/10.1364/OL.34.001516

  • http://dx.doi.org/10.1364/BOE.5.002591 | https://arxiv.org/pdf/1210.0310.pdf | http://dx.doi.org/10.1364/BOE.4.000803

  • Scanning laser ophthalmoscope SLO

    http://dx.doi.org/10.5772/58314

    https://en.wikipedia.org/wiki/Scanning_laser_ophthalmoscopy

    Image from a patient with autosomal dominant RP. The background is an infra-red SLO image from the Heidelberg Spectralis. The line indicates the location of the SD-OCT scan, which goes through fixation. The SD-OCT scan shows that photoreceptors are preserved in the central macula. A reduced-scale AOSLO montage is aligned and superimposed on the background image. The insets are full-scale sections of the AOSLO montage at two locations indicated by the black squares. Godara et al. (2010)

    http://dx.doi.org/10.1016/j.preteyeres.2015.07.007

  • What modality is the best for diagnosis? Of the established methods (visual field, fundus photograph and SD-OCT), SD-OCT seems to clearly offer the best diagnostic capability

    Results: Among the four specialists, the inter-observer agreement across the three diagnostic tests was poor for VF and photos, with kappa (κ) values of 0.13 and 0.16, respectively, and moderate for OCT, with a κ value of 0.40. Using panel consensus as the reference standard, OCT had the highest discriminative ability, with an area under the curve (AUC) of 0.99 (95% CI 0.96–1.0) compared to photograph AUC 0.85 (95% CI 0.73–0.96) and VF AUC 0.86 (95% CI 0.76–0.96), suggestive of performance closer to that of a group of glaucoma specialists. Blumberg et al. (2016)

    For the analysis of the performance of each test modality, the scores from each rater were summed to a composite, ordinal measure. Curves and AUC values are shown for each diagnostic test summed across all specialists. AUC for VF was 0.86, for photo was 0.85, and for OCT was 0.99. The blue line corresponds to VF, red to photos, and green to OCT, and the straight line to the reference standard
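    The AUC of such a composite ordinal score can be computed directly via the Mann–Whitney formulation: the probability that a randomly chosen diseased eye receives a higher score than a randomly chosen healthy one, with ties counting one half. A minimal sketch with made-up toy scores (not the study data):

```python
def auc_from_scores(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case (label 1) gets a higher composite
    score than a randomly chosen negative case (label 0); ties = 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: ratings from four graders summed to a composite score.
scores = [14, 12, 9, 8, 5, 3]
labels = [1, 1, 1, 0, 0, 1]
print(auc_from_scores(scores, labels))  # → 0.75
```

    This rank-based form is exactly what trapezoidal integration of the ROC curve yields for an ordinal rater score, without needing any threshold sweep.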

    Optic disc photograph, visual field, and SD-OCT for representative patient A.

    http://dx.doi.org/10.1167/iovs.15-18931

    http://dx.doi.org/10.5772/58314http://www.opt.indiana.edu/people/faculty/burns/centerforophthalmicimaging/aoslo.htmhttps://en.wikipedia.org/wiki/Scanning_laser_ophthalmoscopyhttp://dx.doi.org/10.1097/OPX.0b013e3181ff9a8b

  • Future of OCT and retinal biomarkers. From Schmidt-Erfurth et al. (2016): The therapeutic efficacy of VEGF inhibition in combination with the potential of OCT-based quantitative biomarkers to guide individualized treatment may shift the medical need from CNV treatment towards other and/or additional treatment modalities. Future therapeutic approaches will likely focus on early and/or disease-modifying interventions aiming to protect the functional and structural integrity of the morphologic complex that is primarily affected in AMD, i.e. the choriocapillary–RPE–photoreceptor unit. Obviously, new biomarkers tailored towards early detection of the specific changes in this functional unit will be required, as well as follow-up features defining the optimal therapeutic goal during extended therapy, i.e. life-long in neovascular AMD. Three novel additions to the OCT armamentarium are particularly promising in their capability to identify the biomarkers of the future:

    Polarization-sensitive OCT · OCT angiography · Adaptive optics imaging

    this modality is particularly appropriate to highlight early features during the pathophysiological development of neovascular AMD

    Findings from studies using adaptive optics implied that decreased photoreceptor function in early AMD may be possible, suggesting that eyes with pseudodrusen appearance may experience decreased retinal (particularly scotopic) function in AMD independent of CNV or RPE atrophy.

    ...the specific patterns of RPE plasticity, including RPE atrophy, hypertrophy, and migration, can be assessed and quantified. Moreover, polarization-sensitive OCT allows precise quantification of RPE-driven disease at the early stage of drusen.

    Angiographic OCT, with its potential to capture choriocapillary, RPE, and neuroretinal features, provides novel types of biomarkers identifying disease pathophysiology rather than late consecutive features during advanced neovascular AMD.

    Schlanitz et al. (2011)

    zmpbmt.meduniwien.ac.at; see also Leitgeb et al. (2014)

    Zayit-Soudry et al. (2013)

    http://dx.doi.org/10.1167/iovs.15-18931 | http://iovs.arvojournals.org/issues.aspx?issueid=935468&journalid=177

  • Polarization-sensitive OCT

    Features of Retinal Pigment Epithelium (RPE) evaluated on PS-OCT. Color fundus photographs (1a–4a); PS-OCT RPE thickness maps (1b–4b); and PS-OCT RPE segmentation B-scans (1c–4c) corresponding to the yellow horizontal lines in the en-face images. Images illustrate examples of RPE atrophy ([1a–c], dashed white line); RPE thickening ([2a–c], yellow circle); RPE skip lesion ([3a–c], white arrow) and RPE aggregations ([4a–c], yellow arrows). Roberts et al. (2016)

    Color fundus photography (a), late phase fluorescein angiography (b), PS-OCT imaging (c–j), and conventional SD-OCT imaging (k–o) of the right eye of a patient with subretinal fibrosis secondary to neovascular AMD. Retardation en face (c), pseudo scanning laser ophthalmoscope (SLO) (d), median retardation en face (e), and the axis en face map thresholded by median retardation (f) show similarity with standard imaging (a, b). In the averaged intensity (g), depolarizing material (h), axis orientation (i), and retardation B-scans (j) from PS-OCT, the scar complex can be observed as subretinal hyperreflective and birefringent tissue. The retinal pigment epithelium is absent in the area of fibrosis (h); however, clusters of depolarizing material are consistent with pigment accumulations in (a). Note the column-like pattern in the axis orientation B-scan image (i) reflecting the intrinsic birefringence pattern of collagenous fibers in fibrous tissue. Tracings from PS-OCT segmentation (f) were overlaid on color fundus photography (a) to facilitate the comparison between the two imaging modalities. The retinal thickness map (k), central horizontal (l) and vertical (m) B-scans, as well as an ETDRS grid with retinal thickness (n) and the pseudo-SLO (o) of the fibrous lesion generated from conventional SD-OCT (Carl Zeiss Meditec), are shown for comparison. Color scales: 0° to 50° for retardation en face (c), −90° to +90° for axis orientation (f, i), 0° to +90° for median retardation (e) and retardation B-scan (j). Roberts et al. (2016)b

    http://dx.doi.org/10.1016/j.preteyeres.2015.07.007http://dx.doi.org/10.1167/iovs.10-6846http://www.zmpbmt.meduniwien.ac.at/forschung/optical-imaging/functional-imaging/doppler-oct/http://dx.doi.org/10.1016/j.preteyeres.2014.03.004http://dx.doi.org/10.1167/iovs.13-12433http://dx.doi.org/10.1016/j.preteyeres.2015.07.007http://dx.doi.org/10.1016/j.preteyeres.2015.07.007http://dx.doi.org/10.1016/j.preteyeres.2015.07.007

  • OCT angiography

    OCT Angiography and Fluorescein Angiography of Microaneurysms in diabetic retinopathy. The right eye (A) and left eye (B) of a 45-year-old Caucasian man with non-proliferative diabetic retinopathy using the swept-source optical coherence tomography angiography (OCTA) prototype. (A1) Fluorescein angiography (FA) cropped to approximately 6 x 6 mm. Aneurysms are circled in yellow. (A2) Full-thickness (internal limiting membrane to Bruch's membrane) 6 x 6 mm OCT angiogram. (B1) FA cropped to approximately 3 x 3 mm. Aneurysms are circled in yellow. (B2) Full-thickness 3 x 3 mm OCT angiogram, which provides improved detail over 6 x 6 mm OCT angiograms and demonstrates higher sensitivity in detecting microvascular abnormalities. The FAZ appears enlarged. Aneurysms seen on FA in B1 that are also seen on OCTA are circled in yellow. Aneurysms on FA that are seen as areas of capillary non-perfusion on OCTA are circled in blue.

    de Carlo et al. (2015)

    Disc photographs (A, C) and en face OCT angiograms (B, D) of the ONH in representative normal (A, B) and preperimetric glaucoma (PPG) subjects (C, D). Both examples are from left eyes. In (B) and (D) the solid circles indicate the whole discs, and the dash circles indicate the temporal ellipses. A dense microvascular network was visible on the OCT angiography of the normal disc (B). This network was greatly attenuated in the glaucomatous disc (D)

    Jia et al. (2012)

    Total (a) and temporal (b) optic nerve head (ONH) acquisition in a normal patient. Total (c) and temporal (d) ONH acquisition in a glaucoma patient

    Lévêque et al. (2016)

    In glaucoma the vascularization of the optic nerve head is greatly attenuated; this is not readily visible from the fundus photograph (see above).

    Prada et al. (2016)

    http://dx.doi.org/10.1167/iovs.15-18494 | http://dx.doi.org/10.1167/iovs.15-18694

  • Adaptive optics imaging

    http://dx.doi.org/10.1186/s40942-015-0005-8 | http://dx.doi.org/10.1364/BOE.3.003127 | http://dx.doi.org/10.1155/2016/6956717 | http://dx.doi.org/10.1016/j.survophthal.2015.10.004

  • Adaptive optics systems in practice

    Not many commercial systems are available (they are found mainly in university laboratories), but see Imagine Eyes' http://www.imagine-eyes.com/product/rtx1/

    http://dx.doi.org/10.2147/OPTH.S64458 | http://dx.doi.org/10.1167/11.5.6 | http://dx.doi.org/10.1038/eye.2011.1

  • Adaptive optics Functional add-ons

    https://dx.doi.org/10.1364/BOE.3.000225

    https://doi.org/10.1364/BOE.6.003405

    https://doi.org/10.1364/BOE.7.001051 | http://dx.doi.org/10.1145/2857491.288858

    Integrate pupillometry for clinical assessment into the AO system, as pupil tracking is useful for optimizing imaging quality as well

    doi:10.1371/journal.pone.0162015

    http://www.imagine-eyes.com/product/rtx1/

  • Multispectral Imaging

    http://dx.doi.org/10.1038/eye.2011.202

    Absorption spectra for the major absorbing elements of the eye. Note that some of the spectra change with relatively small changes in wavelength. Maximizing the differential visibility requires utilizing small spectral slices. Melanin is the dominant absorber beyond 600 nm.

    Zimmer et al. (2014)


    Zimmer et al. (2014): The aim of this project is to build and clinically test a reliable multi-spectral imaging device that allows in vivo imaging of oxygen tension and β-amyloid in human eyes. Maps showing the possible existence and distribution of β-amyloid plaques will be obtained in glaucoma patients and possibly patients with (early) Alzheimer's disease.

    http://www.neuroptics.com/ | https://www.lfe.mw.tum.de/en/research/methods-and-lab-equipment/pupillometry/

  • OCT towards handheld devices

    http://dx.doi.org/10.1364/BOE.5.000293 (cited by 54) | http://dx.doi.org/10.1038/nphoton.2016.141

    http://dx.doi.org/10.1364/OE.24.013365

    Here, we report the design and operation of a handheld probe that can perform both scanning laser ophthalmoscopy and optical coherence tomography of the parafoveal photoreceptor structure in infants and children without the need for adaptive optics. The probe, featuring a compact optical design weighing only 94 g, was able to quantify packing densities of parafoveal cone photoreceptors and visualize cross-sectional photoreceptor substructure in children with ages ranging from 14 months to 12 years.
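    Quantifying packing density from such en-face photoreceptor images reduces, in its simplest form, to counting bright local maxima per unit retinal area. A toy numpy sketch of that idea (real pipelines add registration, scale calibration and far more robust cone detection; all names and thresholds here are illustrative assumptions):

```python
import numpy as np

def cone_density(img, threshold, patch_area_mm2):
    """Estimate cone packing density (cells/mm^2) by counting strict
    local maxima above `threshold` in an en-face photoreceptor image
    patch covering `patch_area_mm2` of retina. Border pixels skipped."""
    h, w = img.shape
    count = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y, x]
            nb = img[y - 1:y + 2, x - 1:x + 2].copy()
            nb[1, 1] = -np.inf  # exclude the centre from its neighbourhood
            if centre > threshold and centre > nb.max():
                count += 1
    return count / patch_area_mm2
```

    With the patch area known from the imaging geometry, the returned figure is directly comparable to published cone density maps.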

    https://aran.library.nuigalway.ie/handle/10379/5481

    An EU-funded Horizon 2020 project led by Wolfgang Drexler from the Medical University of Vienna is aiming to shrink the core technology to no more than the size of a coin, primarily to diagnose eye diseases including diabetic retinopathy and glaucoma. OCTCHIP (short for 'ophthalmic OCT on a chip'; the project began at the start of 2016) applies this directly in the field of OCT for ophthalmology.

    http://optics.org/news/7/6/19 | cordis.europa.eu/project/rcn/199593 | jeppix.eu

    http://retinatoday.com/2014/10/innovation-in-diagnostic-retinal-imaging-multispectral-imaging | http://ir.uiowa.edu/omia/2014_Proceedings/2014/8/

  • Functional biomarkers

    MICROPERIMETRY VISUAL FIELD

    Right eye of a 72-year-old man. Native en-face image (A) and reticular drusen (RDR) area highlighted (B). Interpolated test results for both scotopic (C) and photopic (D) microperimetry. Numerical values for scotopic (E) and photopic (F) microperimetry.

    Steinberg et al. (2015). The Cassini diagnostic device offers a suite of examinations including corneal topography, mesopic and photopic pupillometry, and color photography for diagnostic purposes. crstodayeurope.com

    Nissen et al. (2014): melanopsin-based pupillometry; differential post-illumination pupil response (PIPR) due to pathological changes in the ganglion cell layer (GCL)

    Pupillometry (Pupillary Light Reflex) ophthalmologymanagement.com

    Multifocal ERG responses from the macular area of a patient with AMD. The responses of the fovea are reduced in amplitude. In the 3-D map it can be seen that the foveal area is flat, suggesting no cone activity, compared with the characteristic peak of responses in the normal retina. webvision.med.utah.edu

    http://dx.doi.org/10.1364/BOE.5.000293 | https://scholar.google.co.uk/scholar?cites=3100892705419344778&as_sdt=2005&sciodt=0,5&hl=en | http://dx.doi.org/10.1038/nphoton.2016.141 | http://dx.doi.org/10.1364/OE.24.013365 | https://aran.library.nuigalway.ie/handle/10379/5481 | http://optics.org/news/7/1/23 | http://optics.org/news/7/6/19 | http://cordis.europa.eu/project/rcn/199593_en.html | http://www.jeppix.eu/document_store/Presentatie_5_Dr._Jeroen_Kalkman.pdf

  • Clinical diagnosis current Fundus photographs, optical coherence tomography (OCT) images, thickness maps, and profiles of thickness of the circumpapillary retinal nerve fiber layer (cpRNFL) in the right eye of a 60-year-old woman with open-angle glaucoma and a mean deviation (MD) of 2.33 dB. Nukada et al. (2011)

    Conclusions

    Assessment of RNFL thickness with OCT was able to detect glaucomatous damage before the appearance of visual field defects on SAP. In many subjects, significantly large lead times were seen when applying OCT as an ancillary diagnostic tool.

    http://dx.doi.org/10.1016/j.ophtha.2015.06.015

    http://dx.doi.org/10.1001/jamaophthalmol.2015.0477 | http://crstodayeurope.com/2015/07/true-corneal-shape-analysis-with-the-cassini/ | http://dx.doi.org/10.3389/fneur.2014.00015 | http://www.ophthalmologymanagement.com/articleviewer.aspx?articleID=106960 | http://webvision.med.utah.edu/book/part-xii-cell-biology-of-retinal-degenerations/age-related-macular-degeneration-amd/

  • Visual field

    Patterns of early glaucomatous visual field loss and their evolution over time http://iovs.arvojournals.org/article.aspx?articleid=2333021

    http://dx.doi.org/10.1016/j.ophtha.2014.08.014

    http://dx.doi.org/10.1016/j.ophtha.2015.10.046 | http://dx.doi.org/10.1016/j.ophtha.2015.12.014

    http://dx.doi.org/10.1016/j.ajo.2015.12.006 http://dx.doi.org/10.1007/s12325-016-0333-6

    Humphrey HFA II-i Field Analyzer http://ibisvision.co.uk/

    http://dx.doi.org/10.1016/j.ophtha.2010.10.025 | http://dx.doi.org/10.1016/j.ophtha.2015.06.015

  • Additional biomarkers

    http://dx.doi.org/10.1016/j.ophtha.2015.11.009

    Conclusions

    We report macular thickness data derived from SD OCT images collected as part of the UKBB study and found novel associations among older age, ethnicity, BMI, smoking, and macular thickness.

    Correspondence: Praveen J. Patel, FRCOphth, MD(Res), Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, EC1V2PD UK.

    http://dx.doi.org/10.1167/iovs.14-15278 | http://dx.doi.org/10.1016/j.arr.2016.05.013

    http://iovs.arvojournals.org/article.aspx?articleid=2333021 | http://dx.doi.org/10.1016/j.ophtha.2014.08.014 | http://dx.doi.org/10.1016/j.ophtha.2015.10.046 | http://dx.doi.org/10.1016/j.ophtha.2015.12.014 | http://dx.doi.org/10.1016/j.ajo.2015.12.006 | http://dx.doi.org/10.1007/s12325-016-0333-6 | http://www.zeiss.com/meditec/en_us/products---solutions/ophthalmology-optometry/glaucoma/diagnostics/perimetry/humphrey-hfa-ii-i.html | http://ibisvision.co.uk/

  • Retina beyond retinal pathologies

    http://dx.doi.org/10.1016/j.neuroimage.2010.06.020

    http://dx.doi.org/10.4172/2161-0460.1000223

    http://dx.doi.org/10.1371/journal.pone.0085718

    http://dx.doi.org/10.1186/s40478-016-0346-z

    Affiliated with: UCL Institute of Ophthalmology, University College London

    retinalphysician.com

    http://dx.doi.org/10.1097/WCO.0b013e328334e99b http://dx.doi.org/10.1016/j.pscychresns.2011.08.011

    http://dx.doi.org/10.1016/j.ophtha.2015.11.009 | http://dx.doi.org/10.1167/iovs.14-15278 | http://dx.doi.org/10.1016/j.arr.2016.05.013

  • Imaging technique implications for Automatic diagnosis

    Garbage in, garbage out: the fundus image captures very macro-level changes and works with advanced pathologies, but how about detecting very early signs, allowing very early interventions as well?

    Cannot analyze something that is not visible in the image

    The retina appears normal in the fundus photograph, but extensive loss of outer segments is revealed in the superimposed montage of AOSLO images. Dropout is visible everywhere in the AOSLO montage, but increases sharply at 6.5 (arrow) from the optic disc, coinciding with the border of the subject's enlarged blind spot. Arrow indicates blood vessel marked in Fig. 2. F = fovea. For a higher resolution image, see Fig. S2. Red boxed region is shown in Fig. 4, green boxed region in Fig. 5a,b. Horton et al. (2015)

    Towards multimodal image analysis: try to image all relevant pathological features, and do multivariate analysis incorporating functional measures, and even some more static variables from electronic health records (EHR)

    http://dx.doi.org/10.1016/j.neuroimage.2010.06.020 | http://dx.doi.org/10.4172/2161-0460.1000223 | http://dx.doi.org/10.1371/journal.pone.0085718 | http://dx.doi.org/10.1186/s40478-016-0346-z | http://www.retinalphysician.com/articleviewer.aspx?articleID=109884 | http://dx.doi.org/10.1097/WCO.0b013e328334e99b | http://dx.doi.org/10.1016/j.pscychresns.2011.08.011

  • Imaging technique: Mobile phone Going for quantity rather than quality

    Instead of high-end imaging solutions, one could go for a smartphone-based solution on the side and try to gather as much low-quality training data as possible, which would then be helpful in developing nations by making healthcare easily accessible.

    For some details and startup, see the following slideshow: http://www.slideshare.net/PetteriTeikariPhD/smartphonepowered-ophthalmic-diagnostics

    eyenetra.com

    http://www.nature.com/articles/srep12364/figures/2 | http://www.nature.com/articles/srep12364#s1 | http://www.nature.com/articles/srep12364/figures/4 | http://www.nature.com/articles/srep12364/figures/5 | http://dx.doi.org/10.1038/srep12364 | http://ophtbook.com/1888/multimodal-retinal-imaging.html | https://www.amazon.com/dp/3642402992?_encoding=UTF8&*Version*=1&*entries*=0&showDetailTechData=1#technical-data

  • Mobile Ecosystems

    Apple HealthKit: https://developer.apple.com/healthkit/

    theophthalmologist.com/issues/0716

    Despite the availability of multiple health data aggregation platforms such as Apple's HealthKit, Microsoft's Health, Samsung's S Health, Google Fit, and Qualcomm Health, the public will need to be convinced that such platforms provide long-term security of health information. In the rapidly developing business opportunities represented by the worlds of eHealth and mHealth, the blurring of the lines between consumer goods and medical devices will be further tested by the consumer goods industry hoping not to come under the scrutiny of the FDA.

    meddeviceonline.com

    http://www.medscape.com/viewarticle/852779

    doi:10.5811/westjem.2015.12.28781

    imedicalapps.com/2016/03/ohiohealth-epic-apple-health/ | http://www.wareable.com/sport/google-fit-vs-apple-health

    http://www.slideshare.net/PetteriTeikariPhD/smartphonepowered-ophthalmic-diagnostics | https://eyenetra.com/product-netra.html | https://youtu.be/PvT9tD3NZvo

  • IMAGE QUALITY

    https://developer.apple.com/healthkit/ | https://theophthalmologist.com/issues/0716/personal-privacy-in-the-age-of-big-data/ | http://www.meddeviceonline.com/doc/bausch-lomb-ibm-develop-app-to-assist-cataract-surgeons-0001 | http://www.medscape.com/viewarticle/852779 | http://dx.doi.org/10.5811/westjem.2015.12.28781 | http://www.imedicalapps.com/2016/03/ohiohealth-epic-apple-health/ | http://www.wareable.com/sport/google-fit-vs-apple-health

  • GENERIC IMAGE QUALITY

    http://www.imatest.com/support/image-quality/

    Imatest Software Suite, commonly used in practice, built using Matlab

  • Retinal Image Quality. See the nice literature review in Dias' Master's thesis.

    Color

    Focus

    Illumination

    Contrast

    + Camera artifacts + Noise (~SNR)

    COLOR

    FOCUS

    CONTRAST

    ILLUMINATION

    CAMERA Artifacts

  • Domain-specific IMAGE QUALITY: Fundus

    Usher et al. (2003)

    Maberley et al. (2004)

    Fleming et al. (2006)

    http://dx.doi.org/10.1016/j.compbiomed.2016.01.027

    Wang et al. (2016)

    http://www.imatest.com/support/image-quality/

  • Domain-specific IMAGE QUALITY: OCT #1

    Nawas et al. (2016)

    http://dx.doi.org/10.1136/bjo.2004.097022

    http://dx.doi.org/10.1080/02713683.2016.1179332 | http://dx.doi.org/10.1111/opo.12289

    https://estudogeral.sib.uc.pt/bitstream/10316/25154/1/Tese%20-%20Jo%C3%A3o%20Dias.pdf

  • Domain-specific IMAGE QUALITY: OCT #2

    The left image is blurred due to poor focusing. This results in increased noise and loss of transversal resolution in the OCT image on the right.

    Signal: The signal strength for this image is 13 dB which is lower than the limit of 15 dB. This results in a more noisy OCT image with a lot of speckling.

    Decentration: The ring scan is not correctly centred, as can be observed in the left image. The edge of the optic nerve head crosses more than two circles. Therefore the ring scan is rejected.

    Algorithm failure: The red line in the OCT image right is not clearly at the border of the RNFL. The location corresponds to inferior of the ONH.

    Retinal pathology: There is severe peri-papillary atrophy. It can be seen that this affects the RNFL enormously.

    Illumination: The OCT scan here is badly illuminated. Also here this results in speckling and decrease of resolution.

    Beam placement: the laser beam is not placed centrally. This can be seen at the outer nuclear layer (ONL). The two arrows point to two regions of the ONL. The left arrow points to a light gray region whereas the other points to a darker gray region. If there is too much difference in the colour of the ONL itself, the scan is rejected.

    The OSCAR-IB Consensus Criteria for Retinal OCT Quality Assessment: http://dx.doi.org/10.1371/journal.pone.0034823
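Several of the OSCAR-IB-style rejection criteria above are mechanical enough to automate before any further analysis. A minimal sketch in Python; the `scan` dictionary, its field names and the 20-pixel decentration tolerance are illustrative assumptions, and only the 15 dB signal floor comes from the criteria quoted above:

```python
# Hedged sketch: automating two OSCAR-IB-style checks. Only the 15 dB
# signal floor is taken from the criteria above; the scan dictionary,
# field names and decentration tolerance are illustrative assumptions.

def check_signal_strength(signal_db, floor_db=15.0):
    """Reject scans whose reported signal strength is below the floor."""
    return signal_db >= floor_db

def check_decentration(center_xy, onh_xy, max_offset_px=20.0):
    """Reject ring scans centred too far from the optic nerve head."""
    dx = center_xy[0] - onh_xy[0]
    dy = center_xy[1] - onh_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_offset_px

def passes_quality(scan):
    """Combine the per-criterion checks; any failure rejects the scan."""
    return (check_signal_strength(scan["signal_db"])
            and check_decentration(scan["center"], scan["onh"]))

# The 13 dB example from the slide fails the 15 dB floor
scan = {"signal_db": 13.0, "center": (250, 250), "onh": (252, 248)}
print(passes_quality(scan))  # False
```

A real pipeline would add the remaining OSCAR-IB criteria (algorithm failure, retinal pathology, illumination, beam placement), several of which need image analysis rather than threshold checks.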

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.693.307&rep=rep1&type=pdf#page=91 | http://dx.doi.org/10.1080/09286580490514496 | http://dx.doi.org/10.1167/iovs.05-1155 | http://dx.doi.org/10.1016/j.compbiomed.2016.01.027 | http://dx.doi.org/10.1109/TMI.2015.2506902

  • OCT Device Comparison

    Comparison of images obtained with 3 different spectral-domain OCT devices (Topcon 3D OCT-1000, Zeiss Cirrus, Heidelberg Spectralis) of both eyes of the same patient with early AMD changes taken just minutes apart.

    Comparison of images obtained with 3 different spectral-domain OCTs (Heidelberg Spectralis, Optovue RTVue, Topcon 3D OCT-1000) and with 1 time-domain OCT (Zeiss Stratus) of both eyes of the same patient with a history of central serous chorioretinopathy in both eyes.

    The same set of images as shownabove in pseudo color.

    Comparison of horizontal B-scan images and 3D images of a patient with neovascular age-related macular degeneration obtained with Heidelberg Spectralis, Zeiss Cirrus, Topcon 3D OCT-1000.

    Spectral-domain Optical Coherence Tomography: A Real-world Comparison. Irene A. Barbazetto, MD; Sandrine A. Zweifel, MD; Michael Engelbert, MD, PhD; K. Bailey Freund, MD; Jason S. Slakter, MD

    retinalphysician.com

    e.g. from Xie et al. (2015): Hyper-class Augmented and Regularized Deep Learning for Fine-grained Image Classification

    How much inter-device variance is there? Are the images more or less the same between devices in the CNN sense, with inter-individual variance dominating?

    http://dx.doi.org/10.1016/j.cmpb.2016.03.011 | http://dx.doi.org/10.1136/bjo.2004.097022 | http://dx.doi.org/10.1080/02713683.2016.1179332 | http://dx.doi.org/10.1111/opo.12289

  • OCT IMAGE Quality issues & ARTIFACTS http://dx.doi.org/10.1155/2015/746150

    Blink artifact | Smudged lens | Floaters over optic disk

    Patient-Dependent Factors

    Operator-Dependent Factors

    Device-Dependent Factors

    Pupil Size, Dry Eye, and Cataract
    Floaters and Other Vitreous Opacities
    Epiretinal Membranes
    Blinks
    Motion Artifacts
    Signal Strength

    OCT Lens Opacities | Incorrect Axial Alignment of the OCT image

    Inaccurate Optic Disc Margins Delineation
    Inaccurate Retinal Nerve Fiber Layer Segmentation

    http://dx.doi.org/10.1371/journal.pone.0034823

  • OCT factors affecting quality

    RNFLT: retinal nerve fiber layer thickness.

    Note: case examples obtained using Cirrus HD-OCT (Carl Zeiss Meditec, Dublin, CA; software version 5.0.0.326). The content of this table may not be applicable to different Cirrus HD-OCT models or to other Spectral-domain OCT devices.

    http://dx.doi.org/10.1155/2015/746150

    http://www.retinalphysician.com/articleviewer.aspx?articleID=103064 | http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Xie_Hyper-Class_Augmented_and_2015_CVPR_paper.pdf

  • OCT Image quality issues & ARTIFACTS #2

    http://dx.doi.org/10.1016/j.ophtha.2009.10.029

    Recent studies demonstrated a lower frequency of artifacts in SD-OCT instruments compared with Stratus TD-OCT [2,3]. Interestingly, the authors identified several types of clinically important artifacts generated by SD-OCT, including those previously seen in TD-OCT and those new with SD-OCT [1].

    We have recently performed a similar analysis by comparing TD-OCT and SD-OCT (Querques G, unpublished data, June 2010), and our findings completely agree with those reported by the authors. Here, we would like to focus on new artifacts seen on SD-OCT. Given that the Fourier transform of OCT information is Hermitian [4], a real image is always accompanied by its inverted image [5]. This feature of SD-OCT may be responsible for image artifacts that could be mistaken for retinal lesions. This is especially true if scan acquisition is performed by a technician, and the physician then analyzes the printout for diagnostic evaluation.

    This was recently the case, when we were faced with an unusual printout showing a small, round retinal lesion located within the outer plexiform layer, which presented a shadowing effect not only in the deeper layers but even in the superficial layers. This was evident with both Cirrus HD-OCT and Spectralis HRA-OCT. Interestingly, in some other printouts, the lesion was still located within the outer plexiform layer, even though no clear shadowing effect was evident.

    When we returned to this patient by personally performing the SD-OCT examination, we realized that the patient presented asteroid bodies in the vitreous, which due to the Fourier transformation of OCT information (the inverted image always accompanying the real image), were responsible for the pseudo retinal lesions

    Artifacts represent a major concern of every imaging modality. Although SD-OCT marks a significant advance in the ability to image the retina, artifacts may still influence clinical decisions. Recognizing the limitations of OCT, as well as the new and old misleading image artifacts, would help physicians in everyday clinical practice.

    COMMENTARY by Querques et al. 2010

    Purpose

    To report the frequency of optical coherence tomography (OCT) scan artifacts and to compare macular thickness measurements, interscan reproducibility, and interdevice agreement across 3 spectral-domain (SD) OCT (also known as Fourier domain; Cirrus HD-OCT, RTVue-100, and Topcon 3D-OCT 1000) devices and 1 time-domain (TD) OCT (Stratus OCT) device.

    Results

    Time-domain OCT scans contained a significantly higher percentage of clinically significant improper central foveal thickness (IFT) after manual correction (11-μm change or more) compared with SD OCT scans. Cirrus HD-OCT had a significantly lower percentage of clinically significant IFT (11.1%) compared with the other SD OCT devices (Topcon 3D, 20.4%; Topcon Radial, 29.6%; RTVue (E)MM5, 42.6%; RTVue MM6, 24.1%; P = 0.001). All 3 SD OCT devices had central foveal subfield thicknesses that were significantly more than that of TD OCT after manual correction (P

  • OCT Image quality issues & ARTIFACTS #3

    http://dx.doi.org/10.1016/j.ophtha.2006.06.059

    Conclusions

    Retinal thickness and retinal height could be underestimated in patients with central serous chorioretinopathy (CSC) or neovascular age-related macular degeneration (AMD) after retinal thickness analysis in Stratus OCT, when either automatic measurements or manual caliper-assisted measurements are performed on the analyzed images. We recommend exporting the original scanned OCT images for retinal thickness and retinal height measurement in patients with CSC or neovascular AMD.

    http://dx.doi.org/10.1212/WNL.0000000000002774

    Objective:To develop consensus recommendations for reporting of quantitative optical coherence tomography (OCT) study results.

    Methods:A panel of experienced OCT researchers (including 11 neurologists, 2 ophthalmologists, and 2 neuroscientists) discussed requirements for performing and reporting quantitative analyses of retinal morphology and developed a list of initial recommendations based on experience and previous studies. The list of recommendations was subsequently revised during several meetings of the coordinating group.

    Results:We provide a 9-point checklist encompassing aspects deemed relevant when reporting quantitative OCT studies. The areas covered are study protocol, acquisition device, acquisition settings, scanning protocol, funduscopic imaging, postacquisition data selection, postacquisition data analysis, recommended nomenclature, and statistical analysis.

    Conclusions:The Advised Protocol for OCT Study Terminology and Elements recommendations include core items to standardize and improve quality of reporting in quantitative OCT studies. The recommendations will make reporting of quantitative OCT studies more consistent and in line with existing standards for reporting research in other biomedical areas. The recommendations originated from expert consensus and thus represent Class IV evidence. They will need to be regularly adjusted according to new insights and practices.

    http://dx.doi.org/10.1371/journal.pone.0137316

    Methods: Studies that used intra-retinal layer segmentation of macular OCT scans in patients with MS were retrieved from PubMed. To investigate the repeatability of previously applied layer estimation approaches, we generated datasets of repeating measurements of 15 healthy subjects and 13 multiple sclerosis patients using two OCT devices (Cirrus HD-OCT and Spectralis SD-OCT). We calculated each thickness estimate in each repeated session and analyzed repeatability using intra-class correlation coefficients and coefficients of repeatability.

    Results: We identified 27 articles, eleven of them used the Spectralis SD-OCT, nine Cirrus HD-OCT, two studies used both devices and two studies applied RTVue-100. Topcon OCT-1000, Stratus OCT and a research device were used in one study each. In the studies that used the Spectralis, ten different thickness estimates were identified, while thickness estimates of the Cirrus OCT were based on two different scan settings. In the simulation dataset, thickness estimates averaging larger areas showed an excellent repeatability for all retinal layers except the outer plexiform layer (OPL).

    Conclusions: Given the good reliability, the thickness estimate of the 6mm-diameter area around the fovea should be favored when OCT is used in clinical research. Assessment of the OPL was weak in general and needs further investigation before OPL thickness can be used as a reliable parameter.
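The repeatability analysis above relies on intra-class correlation coefficients, which are easy to compute directly. A sketch, assuming the common ICC(2,1) form (two-way random effects, absolute agreement, single measurement); the toy thickness values are illustrative, not data from the study:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement (Shrout & Fleiss form). data: (subjects, sessions)."""
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between sessions
    resid = (data - data.mean(axis=1, keepdims=True)
             - data.mean(axis=0, keepdims=True) + grand)
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Toy repeated thickness measurements (micrometres): five subjects,
# two sessions; small session-to-session differences -> high ICC
thickness = np.array([[300, 302],
                      [280, 281],
                      [310, 309],
                      [295, 296],
                      [305, 304]], dtype=float)
print(round(icc_2_1(thickness), 3))  # 0.994
```

An ICC near 1 means between-subject differences dominate measurement noise, i.e. good repeatability in the sense used above.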

    http://dx.doi.org/10.1155/2015/746150

  • OCT repeatability

    Explanation of different thickness estimates used for the simulation of repeatability. The red areas or points on the fundus images indicate the values that were averaged to generate the layer thickness estimates. http://dx.doi.org/10.1371/journal.pone.0137316

    Differences in the outer plexiform layer (OPL) in repeated OCT measurements

    The values in the grid are the mean OPL thickness differences for each sector. The right graph maps the OPL thickness of the B-scans in (A) (green line) and (B) (blue line), respectively. The red line indicates the difference between the repeated B-scans

    http://dx.doi.org/10.1016/j.ophtha.2009.10.029 | http://dx.doi.org/10.1016/j.ophtha.2010.10.019 | http://dx.doi.org/10.1016/j.ophtha.2009.03.034

  • OCT Inter-Device variability and intra-device reproducibility

    http://dx.doi.org/10.1136/bjophthalmol-2014-305573

    Methods: 29 eyes were imaged prospectively with Spectralis (Sp), Cirrus (Ci), 3D-OCT 2000 (3D) and RS-3000 (RS) OCTs. Conclusions: By comparison of identical regions, substantial differences were detected between the tested OCT devices regarding technical accuracy and clinical impact. Spectralis showed the lowest error incidence but the highest error impact.

    Purpose: To evaluate and compare the frequency, type and cause of imaging artifacts incurred when using swept-source optical coherence tomography (SS OCT) and Cirrus HD OCT in the same patients on the same day.

    Conclusions: There was no significant difference in the frequency, type and cause of artifacts between SS OCT and Cirrus HD OCT. Artifacts in OCT can influence the interpretation of OCT results. In particular, ERM around the optic disc could contribute to OCT artifacts and should be considered in glaucoma diagnosis or during patient follow-up using OCT.

    http://dx.doi.org/10.3109/02713683.2015.1075219

    http://dx.doi.org/10.1167/tvst.4.1.5

    Conclusions: RTVue thickness reproducibility appears similar to Stratus. Conversion equations to transform RTVue measurements to Stratus-equivalent values within 10% of the observed Stratus RT are feasible. CST changes greater than 10% when using the same machine, or 20% when switching from Stratus to RTVue after conversion to Stratus equivalents, are likely due to a true change beyond measurement error.

    Translational Relevance: Conversion equations to translate central retinal thickness measurements between OCT instruments are critical to clinical trials.

    Bland-Altman plots of the differences between values on machines (RTVue minus Stratus) versus the means of the automated Stratus test–retest values, for each measurement. CST - Central subfield thickness results

    More on Bland-Altman, see for example: McAlinden et al. (2011): Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology Cited by 108
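The Bland–Altman construction referenced above (bias and 95% limits of agreement for paired device measurements) takes only a few lines of NumPy. The paired thickness values below are made up for illustration:

```python
import numpy as np

def bland_altman(a, b):
    """Return means, differences, bias and 95% limits of agreement
    for paired measurements from two devices (e.g. RTVue vs Stratus)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    mean = (a + b) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return mean, diff, bias, loa

# Toy central-subfield-thickness pairs (micrometres; illustrative only)
rtvue = [250, 262, 258, 270, 244]
stratus = [240, 255, 252, 260, 238]
_, _, bias, (lo, hi) = bland_altman(rtvue, stratus)
print(round(bias, 1), round(lo, 1), round(hi, 1))  # 7.8 3.8 11.8
```

A nonzero bias with narrow limits, as here, is exactly the case where a fixed conversion equation between instruments is workable.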

    http://dx.doi.org/10.1016/j.ophtha.2006.06.059 | http://dx.doi.org/10.1212/WNL.0000000000002774 | http://dx.doi.org/10.1371/journal.pone.0137316

  • OCT IMAGE Quality issues & ARTIFACTS #4

    http://dx.doi.org/10.1371/journal.pone.0034823

    The total number of rejected OCT scans from the pooled prospective validation set (159 OCT scans from Amsterdam, San Francisco and Calgary) was high (42–43%) for each of the readers

    http://dx.doi.org/10.1371/journal.pone.0137316

  • OCT IMAGE Quality Summary: Based on the results of the OSCAR-IB study by Tewarie et al. (2012), we can see that almost half of the OCT images were rejected!

    This poses challenges for the deep learning classification framework, as bad-quality samples can then be misclassified.

    Two mutually non-exclusive approaches:

    1) Improve the image quality of the scans:
       improve the hardware itself, or make the scanning more intelligent with software, without having to change the underlying hardware.

    2) Improve the automated algorithms distinguishing good-quality scans from bad-quality ones.
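As a baseline for the algorithmic approach, even a shallow classifier over simple focus and illumination features can gate out bad scans before a deep network sees them. A hedged sketch: the synthetic "scans" and the two features are illustrative assumptions, and a real system would instead train a CNN on manually labeled scans:

```python
import numpy as np

def quality_features(img):
    """Two cheap per-image features that correlate with scan quality:
    mean intensity (illumination) and Laplacian variance (focus/contrast)."""
    img = np.asarray(img, float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.array([img.mean(), lap.var()])

def train_logistic(X, y, lr=0.1, steps=2000):
    """Tiny logistic-regression quality gate trained by gradient descent."""
    X = np.c_[np.ones(len(X)), X]                 # bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict(w, X):
    X = np.c_[np.ones(len(X)), X]
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)

rng = np.random.default_rng(0)
good = [rng.normal(0.6, 0.2, (32, 32)) for _ in range(20)]   # high local contrast
bad = [rng.normal(0.6, 0.02, (32, 32)) for _ in range(20)]   # flat, low contrast
X = np.array([quality_features(im) for im in good + bad])
X = (X - X.mean(0)) / X.std(0)                   # normalize features
y = np.array([1] * 20 + [0] * 20)
w = train_logistic(X, y)
print((predict(w, X) == y).mean())               # training accuracy on toy set
```

The point is only the pipeline shape (feature extraction, training, thresholded prediction); replacing the hand-crafted features with a learned CNN is the deep-learning version discussed in this deck.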

    http://dx.doi.org/10.1136/bjophthalmol-2014-305573 | http://dx.doi.org/10.3109/02713683.2015.1075219 | http://dx.doi.org/10.1167/tvst.4.1.5 | http://dx.doi.org/10.1111/j.1475-1313.2011.00851.x | https://scholar.google.ca/scholar?cites=8849635646054486596&as_sdt=2005&sciodt=0,5&hl=en

  • OCT-SPECIFIC CORRECTIONS #1

    http://dx.doi.org/10.1109/ISBI.2016.749324

    The example of OCT images of the nerve head (below row) affected by motion artifact (top row). (a) En face fundus projection (b) B-scan.

    http://dx.doi.org/10.1088/2057-1976/2/3/035012

    http://dx.doi.org/10.1016/j.ijleo.2016.05.088

    (a-1)–(a-3) are the cartoon part u, texture part v and speckle noise part w of Fig. 1 decomposed by the variational image decomposition model TV-G-Curvelet; (b-1)–(b-3) are the cartoon part u, texture part v and speckle noise part w of Fig. 1 decomposed by the variational image decomposition model TV-Hilbert-Curvelet.

    http://dx.doi.org/10.1371/journal.pone.0034823

  • OCT-SPECIFIC CORRECTIONS #2

    Optical Coherence Tomography (OCT) is an emerging technique in the field of biomedical imaging, with applications in ophthalmology, dermatology, coronary imaging, etc. OCT images usually suffer from a granular pattern, called speckle noise, which restricts the process of interpretation. Therefore the need for speckle noise reduction techniques is of high importance. To the best of our knowledge, the use of Independent Component Analysis (ICA) techniques has never been explored for speckle reduction of OCT images. Here, a comparative study of several ICA techniques (InfoMax, JADE, FastICA and SOBI) is provided for noise reduction of retinal OCT images. Having multiple B-scans of the same location, the eye movements are compensated using a rigid registration technique. Then, different ICA techniques are applied to the aggregated set of B-scans for extracting the noise-free image. Signal-to-Noise-Ratio (SNR), Contrast-to-Noise-Ratio (CNR) and Equivalent-Number-of-Looks (ENL), as well as analysis of the computational complexity of the methods, are considered as metrics for comparison. The results show that the use of ICA can be beneficial, especially when only a few B-scans are available.

    Overall, Second Order Blind Identification (SOBI) is the best among the ICA techniques considered here in terms of performance based on SNR, CNR and ENL, while needing less computational power.
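The simplest member of this multi-frame family is plain averaging of the registered B-scans, which the ICA variants above improve upon when only a few repeats are available. A sketch on a synthetic phantom; the phantom and the additive noise model are illustrative assumptions (real OCT speckle is multiplicative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 'retina': smooth layered intensity profile (depth gradient)
truth = np.tile(np.linspace(0.2, 1.0, 64), (64, 1)).T
# Eight registered B-scans of the same location, independent speckle-like noise
stack = truth + rng.normal(0.0, 0.3, size=(8, 64, 64))

def snr_db(img, truth):
    """SNR of an estimate against the known phantom, in dB."""
    noise = img - truth
    return 10 * np.log10((truth ** 2).mean() / (noise ** 2).mean())

avg = stack.mean(axis=0)   # compound the registered repeats
# Averaging 8 independent repeats gains about 10*log10(8) ~ 9 dB
print(round(snr_db(stack[0], truth), 1), round(snr_db(avg, truth), 1))
```

ICA/SOBI would replace the plain `mean` with a learned unmixing of shared signal vs. per-scan speckle, which is why the abstract above reports it helping most with few B-scans.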

    http://dx.doi.org/10.1007/978-3-540-77550-8_13

    http://dx.doi.org/10.1371/journal.pone.0034823

  • OCT Layer segmentation #1

    http://dx.doi.org/10.1364/BOE.5.000348

    http://dx.doi.org/10.1117/1.JBO.21.7.076015

    http://dx.doi.org/10.1364/AO.55.000454

    http://dx.doi.org/10.1142/S1793545816500085

    http://dx.doi.org/10.1364/BOE.7.002888

    http://dx.doi.org/10.1109/ISBI.2016.749324 | http://dx.doi.org/10.1088/2057-1976/2/3/035012 | http://dx.doi.org/10.1016/j.ijleo.2016.05.088 | http://www.sciencedirect.com.libproxy.aalto.fi/science/article/pii/S003040261630537X#fig0005

  • OCT Layer segmentation #2

    http://dx.doi.org/10.1371/journal.pone.0162001

    https://www.researchgate.net/profile/Adel_Belouchrani/publication/2699542_Second_Order_Blind_Separation_of_Temporally_Correlated_Sources/links/00463517ab3e0aed06000000.pdf | http://dx.doi.org/10.1007/978-3-540-77550-8_13

  • IMAGE QUALITY ASSESSMENT: With enough manual labels we could again train a deep learning network to do the quality classification for us

    For natural images: http://dx.doi.org/10.1109/TNNLS.2014.2336852

    For natural images: http://dx.doi.org/10.1016/j.image.2015.10.005

    http://dx.doi.org/10.1016/j.cmpb.2016.03.011

    For natural images: http://arxiv.org/abs/1602.05531

    What about also using generative adversarial networks (GANs) in training for proper image quality?

    http://dx.doi.org/10.1364/BOE.5.000348 | http://dx.doi.org/10.1117/1.JBO.21.7.076015 | http://dx.doi.org/10.1364/AO.55.000454 | http://dx.doi.org/10.1142/S1793545816500085 | http://dx.doi.org/10.1364/BOE.7.002888

  • AI-enhanced RETINAL IMAGING

    http://dx.doi.org/10.1371/journal.pone.0162001

  • OCT Devices already have GPUs: Increasing use of GPUs throughout the OCT computation pipeline.

    More operations in less time compared to CPU computations with many algorithms.

    GPU computation allows one to embed artificial intelligence into the device itself

    e.g. Moptim Mocean 3000

    http://dx.doi.org/10.1109/TNNLS.2014.2336852 | http://dx.doi.org/10.1016/j.image.2015.10.005 | http://dx.doi.org/10.1016/j.cmpb.2016.03.011 | http://arxiv.org/abs/1602.05531 | https://scholar.google.co.uk/scholar?as_ylo=2012&q=Generative+adversarial+nets&hl=en&as_sdt=0,5

  • OCT or custom FPGA boards

    http://www.alazartech.com/landing/oct-news-2016-09

    Complete on-FPGA FFT solution that includes:

    User programmable dispersion compensation function

    User programmable windowing

    Log calculation

    FFT magnitude output in floating point or integer format

    Special "Raw + FFT" mode that allows users to acquire both time domain and FFT data

    This can be very useful during the validation process
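For validation, the on-FPGA pipeline above (windowing, optional dispersion-compensation phase, FFT, log magnitude) can be mirrored in a few lines of NumPy and compared against the "Raw + FFT" output. The spectral fringe below is synthetic, and k-linearization is assumed already done; parameters are illustrative:

```python
import numpy as np

def reconstruct_ascan(spectrum, window=None, dispersion_phase=None):
    """Software mirror of the on-FPGA pipeline described above:
    window -> optional dispersion-compensation phase -> FFT -> log magnitude."""
    n = len(spectrum)
    w = np.hanning(n) if window is None else window
    s = spectrum * w
    if dispersion_phase is not None:
        s = s * np.exp(1j * dispersion_phase)    # user-programmable phase term
    depth = np.fft.fft(s)[: n // 2]              # keep positive depths only
    return 20 * np.log10(np.abs(depth) + 1e-12)  # log-magnitude A-scan

# A single reflector at depth bin 100 produces a cosine fringe in k-space
n = 2048
k = np.arange(n)
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * k / n)
ascan = reconstruct_ascan(fringe)
print(int(np.argmax(ascan[10:])) + 10)  # 100 (skipping the DC region)
```

Running the same raw spectra through this reference and through the FPGA's FFT output is one way to use the "Raw + FFT" mode during validation.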

  • GPU interventional OCT

    4D Optical Coherence Tomography Imaging: Demo of GPU-based real-time 4D OCT technology, providing a comprehensive spatial view of the micro-manipulation region with accurate depth perception. Image reconstruction performed by an NVIDIA GTX 580 and volume rendering by an NVIDIA GTS 450. The images are volume rendered from the same 3D data set. Imaging speed is 5 volumes per second. Each volume has 256×100×1024 voxels, corresponding to a physical volume of 3.5 mm × 3.5 mm × 3 mm.

    http://www.nvidia.co.uk

    Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system

    K. Zhang (2010), Cited by 161

    http://dx.doi.org/10.1167/iovs.16-19277

    repository.cmu.edu

    http://dx.doi.org/10.3807/JOSK.2013.17.1.068

    Flowchart of the computation and image display of the hybrid CPU/GPU processing scheme in the program.

    http://dx.doi.org/10.1364/OE.20.014797

  • Embedded decision system: Next-generation upgrade from Quadro 600 to Titan X / GTX 970, depending on the power needed per price.

    Accelerating traditional signal processing operations, and the future artificial intelligence analysis

    AI does not have to be limited to analysis for pathology! Use AI to find Regions of Interest (ROI), and do denser sampling from possible pathological areas of the retina.

    More data from relevant regions leads to better analysis accuracy. AI to optimize image quality, e.g.

    Super-resolution from multiple scans within device

    Multiple scans to get rid of artifacts

    Train AI for image denoising / deconvolution

    Make the analysis quality less reliant on the operator

    Systems engineering approach: Optimize the whole process from imaging to analysis jointly rather than separately

    MOptim MOcean 3000

    http://www.moptim.com/uploads/image/20151020/1445340064.pdf | http://dx.doi.org/10.1117/1.3275463 | http://dx.doi.org/10.1117/1.JBO.18.2.026002 | http://dx.doi.org/10.1364/BOE.7.001815 | http://dx.doi.org/10.1117/12.2211072 | http://dx.doi.org/10.1371/journal.pone.0124192 | http://dx.doi.org/10.1016/j.media.2014.10.012 | http://dx.doi.org/10.1016/j.media.2013.05.008 | http://stackoverflow.com/questions/34715055/choosing-between-geforce-or-quadro-gpus-to-do-machine-learning-via-tensorflow | http://www.videocardbenchmark.net/compare.php?cmp[]=2449&cmp[]=3521&cmp[]=3295

  • Upgrading existing systems: Add value to the existing install base by providing an AI module that is, in essence, a Raspberry Pi/Arduino-style minicomputer running an embedded GPU accelerator (NVIDIA Jetson).

    http://www.alazartech.com/landing/oct-news-2016-09
    http://www.alazartech.com/Technology/On-FPGA-FFT
    http://www.octnews.org/

  • Smart AI Acquisition Examples

    zeiss.com Cirrus Smart HD

    zeiss.com

    http://www.nvidia.co.uk/content/cuda/spotlights/tools-for-microsurgeons.html
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2897754/
    http://dx.doi.org/10.1167/iovs.16-19277
    http://repository.cmu.edu/cgi/viewcontent.cgi?article=2199&context=robotics
    http://dx.doi.org/10.3807/JOSK.2013.17.1.068
    http://dx.doi.org/10.1364/OE.20.014797

  • Image quality improvement: Super-resolution. Super-resolution from retinal fundus videos (Köhler et al. 2014)

    Improved dynamic range

    https://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2016/Kohler16-SRI-talk.pdf

    http://www.moptim.com/en/product.php?cid=27

  • Image quality improvement: 3D Reconstruction

    Multiple GPUs, Threads and Reconstruction Volumes: Multiple GPUs can be used by Kinect Fusion; however, each must have its own reconstruction volume(s), as an individual volume can only exist on one GPU. It is recommended your application is multithreaded for this, and each thread specifies a device index when calling NuiFusionCreateReconstruction.

    Multiple volumes can also exist on the same GPU: just create multiple instances of INuiFusionReconstruction. Individual volumes can also be used in multi-threaded environments; however, note that the volume-related functions will block if a call is in progress from another thread.

    https://msdn.microsoft.com/en-us/library/dn188670.aspx

    motivation from Kinect and microscopy

    http://www.label.mips.uha.fr/fichiers/articles/bailleul12spie.pdf

    http://dx.doi.org/10.1364/OE.24.011839

    http://www.nvidia.co.uk/object/jetson-tk1-embedded-dev-kit-uk.html
    http://elinux.org/Jetson_TK1
    http://www.nvidia.co.uk/object/jetson-embedded-systems-uk.html

  • 3D Reconstruction for OCT. 2D example:

    "The image reconstructions and super-resolution processing can be further accelerated by paralleled computing with graphics processing units (GPU), which can potentially improve the applicability of the PSR method illustrated herein." -He et al. (2016)

    3D example:

    "We use a parallelized and hardware accelerated SVR reconstruction method. A full field of view reconstruction of 8 input stacks at 288×288×100 voxels takes up to 1-2 hours using a small patch size (e.g., a = 32, = 16) on a multi GPU system (Intel Xeon E5-2630 2.60GHz system with 16 GB RAM, an Nvidia Tesla K40 (released back in 2013, 1.43 Tflops in double precision) and a Geforce 780). Using large (k = 0.1) overlapping super-pixels reduces this time to approximately 45 min for a full field-of-view volume, while maintaining a comparable result to the best configuration of overlapping square patches." -Kainz et al. (2015)

    http://www.zeiss.com/content/dam/Meditec/downloads/pdf/CIRRUS%20HD-OCT/en-31-010-0014i-cirrushdoct-8.0-ds-ous.pdf
    http://www.zeiss.com/meditec/en_de/products---solutions/ophthalmology-optometry/glaucoma/diagnostics/oct/oct-optical-coherence-tomography/cirrus-hd-oct.html

  • Image Quality Conclusion: As shown in previous slides, almost half of the OCT images were discarded due to bad image quality (OSCAR-IB study by Tewarie et al., 2012)

    Wasteful to have the operator scan the patient and end up with a suboptimal-quality image. Better to take an automated approach with multi-exposure scans, making scan quality operator-independent in the end. Take inspiration from computational photography and 'smart imaging'.

    Köhler et al. (2013)

    MM'10, October 25-29, 2010, Firenze, Italy, graphics.stanford.edu

    http://prolost.com/blog/lightl16
    https://light.co/

    ee.surrey.ac.uk

    doi:10.1109/ICASSP.2012.6288078

    Manuscripts are solicited to address a wide range of topics on computer vision techniques and applications focusing on computational photography tasks, including but not limited to the following:

    Advanced image processing; Computational cameras; Computational illumination; Computational optics; High-performance imaging; Multiple images and camera arrays; Sensor and illumination hardware; Scientific imaging and videography; Organizing and exploiting photo/video collections; Vision for graphics; Graphics for vision

    wikicfp.com

    http://dx.doi.org/10.1007/978-3-319-10404-1_81
    https://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2016/Kohler16-SRI-talk.pdf

  • Data Engineering: Data wrangling before analysis

    https://msdn.microsoft.com/en-us/library/dn188670.aspx
    http://www.label.mips.uha.fr/fichiers/articles/bailleul12spie.pdf
    http://dx.doi.org/10.1364/OE.24.011839

  • Data engineering vs. data science. PROBLEM: Datasets come in various formats, often collected by clinicians with little understanding of the data analysis steps. Try to develop a pre-processing pipeline that takes several different data formats and can parse them into a standardized (internal standard compatible with TensorFlow and similar libraries) HDF5 data format: https://en.wikipedia.org/wiki/Hierarchical_Data_Format

    HDF allows storing image data mixed with metadata, for example, can be read in various environments, and can be further converted to other databases relatively easily.

    If HDF5 proves to be inefficient, we can batch convert all the databases to a new format if desired.

    HDF5 is common in deep learning; Fuel, for example, uses HDF5.
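As a minimal sketch of such an internal HDF5 standard (the group and attribute names here are hypothetical, not an established schema), h5py makes it straightforward to keep image data and metadata together in one file:

```python
import numpy as np
import h5py

# Hypothetical layout: one group per patient; image data and metadata together.
with h5py.File("retina_example.h5", "w") as f:
    grp = f.create_group("patient_001")
    oct_cube = np.zeros((16, 64, 64), dtype=np.uint8)        # dummy OCT volume
    dset = grp.create_dataset("oct", data=oct_cube, compression="gzip")
    dset.attrs["device"] = "Spectralis"                      # metadata as attributes
    dset.attrs["scan_date"] = "2016-10-27"

# Read it back: image array and metadata travel together.
with h5py.File("retina_example.h5", "r") as f:
    vol = f["patient_001/oct"][...]
    device = f["patient_001/oct"].attrs["device"]
```

The same file can be opened from R, MATLAB, or C via their respective HDF5 bindings, which is the interoperability argument above.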

    https://www.researchgate.net/profile/Subhamoy_Mandal/publication/283482681_Improving_Optoacoustic_Image_Quality_via_Geometric_Pixel_Super-Resolution_Approach/links/5639d62308aed5314d231e72.pdf
    https://www.researchgate.net/profile/Bernhard_Kainz2/publication/282124358_Flexible_Reconstruction_and_Correction_of_Unpredictable_Motion_from_Stacks_of_2D_Images/links/5603d94508ae4accfbb8cbcd.pdf

  • ETL (Extract, transform and load)

    A Typical Data Science Department

    Most companies structure their data science departments into 3 groups:

    Data scientists: the folks who are better engineers than statisticians and better statisticians than engineers. Aka, the thinkers.

    Data engineers: these are the folks who build pipelines that feed data scientists with data and take the ideas from the data scientists and implement them. Aka, the doers.

    Infrastructure engineers: these are the folks who maintain the Hadoop cluster / big data infrastructure. Aka, the plumbers.

    https://cran.r-project.org/web/packages/h5/index.html

    http://docs.h5py.org/en/latest/build.html
    http://www.kdnuggets.com/2016/03/engineers-shouldnt-write-etl.html

    http://dx.doi.org/10.1371/journal.pone.0034823
    http://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2013/Koehler13-ANQ.pdf
    http://graphics.stanford.edu/~shpark7/projects/hdr_gelfand_mm10.pdf
    http://prolost.com/blog/lightl16
    https://light.co/
    http://www.ee.surrey.ac.uk/CVSSP/Publications/papers/Schubert-WACV-2009.pdf
    http://dx.doi.org/10.1109/ICASSP.2012.6288078
    http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=55679

  • PROCESSING FUNNEL

    DATA ENGINEERING DATA SCIENCE

  • Interoperability

    https://youtu.be/0E121gukglE?t=26m36s

  • Retinal Data Sources

    https://en.wikipedia.org/wiki/Hierarchical_Data_Format
    https://github.com/mila-udem/fuel

  • OPEN-SOURCE DATA SOURCES

    Allam et al. (2015)

    https://cran.r-project.org/web/packages/h5/index.html
    http://docs.h5py.org/en/latest/build.html
    http://www.kdnuggets.com/2016/03/engineers-shouldnt-write-etl.html

  • Proprietary formats: Vendor-specific OCT

    zeiss.com

    From Huang et al. (2013):Scans were obtained with certified photographers to minimize the OCT data acquisition artifacts [15], [20]. The data samples were saved in the Heidelberg proprietary .e2e format. They were exported from a Heidelberg Heyex review software (version 5.1) in .vol format and converted to the DICOM (Digital Imaging and Communication in Medicine) [21] OPT (ophthalmic tomography) format using a custom application built in MATLAB.

    These plugins interpret raw binary files exported from Heidelberg Spectralis Viewing Software. They successfully import both 8-bit SLO and 32-bit SD-OCT images, retaining pixel scale (optical and SD-OCT), segmentation data, and B-scan position relative to the SLO image (included in v1.1+). In addition to single B-scan SD-OCT images, the plug-in also opens multiple B-scan SD-OCT images as a stack, enabling 3-D reconstruction, analysis, and modeling. The plug-in is compatible with the Spectralis Viewing Module exporting raw data in HSF-OCT-### format. Compatibility has been tested with HSF-OCT-101, 102, and 103.

    http://dx.doi.org/10.1016/j.exer.2010.10.009

    Heidelberg Engineering Spectralis OCT RAW data (.vol ending): Circular scans and Optic Nerve Head centered volumes are supported

    www5.cs.fau.de .. octseg/ - File format? Ease of reading?

    nidek-intl.com

    No Cube export

    moptim.com

    optos.com

    optovue.com

    topconmedical.com

    File formats? Ease of reading? For these vendors!

  • Proprietary Open-source data formats

    OpenEyes is a collaborative, open source, project led by Moorfields Eye Hospital. The goal is to produce a framework which will allow the rapid, and continuous development of electronic patient records (EPR) with contributions from Hospitals, Institutions, Academic departments, Companies, and Individuals.

    https://github.com/openeyes/OpenEyes

    https://youtu.be/0E121gukglE?t=26m36s

  • Proprietary Bitbucket C++ Project

    https://bitbucket.org/uocte/uocte/wiki/Home

    uocte / Heidelberg File Format

    Because no specification of this file format was available for the development of uocte, the file format was reverse engineered for interoperability. The information on this page therefore is incomplete and may be incorrect. It only serves to document which parts of the data are interpreted by uocte and which assumptions it makes concerning interpretation.

    Heidelberg data is stored in a single binary, little-endian file with extension .e2e or .E2E. It contains a header, a directory that is split in chunks of entries in a single-linked list, and data chunks. The high-level structure is this:

    uocte / Topcon File Format

    uocte / NIDEK File Format

    (Both carry the same reverse-engineering disclaimer as the Heidelberg page above.)

    File Format Notes: UOCTML

    Eyetec

    Heidelberg

    NIDEK

    Topcon

    Zeiss

    Reverse-engineered by Paul Rosenthal to have file readers for proprietary bit ordering

  • Typical volumetric medical formats: DICOM; NIFTI (.nii); ANALYZE (.hdr, .img)

    brainder.org

    http://nipy.org/nibabel/gettingstarted.html

    mathworks.com

    http://people.cas.sc.edu/rorden/dicom/index.htmlhttp://dicom.nema.org/

    NEMA standard PS3, and as ISO standard 12052:2006

    Practically outdated

    An Analyze 7.5 data set consists of two files:

    Header file (something.hdr): Provides dimensional, identifying and some processing history information

    Image file (something.img): Stream of voxels, whose datatype and ordering are described by the header file

    These links also describe the Analyze format in more detail:

    Mayo/Analyze description of file format.

    SPM/FIL description of format (this is a less detailed description than the SPM99 help system provides - see above). However, note that the SPM version of the Analyze format uses a couple of the header fields in an unconventional way (see below).

    The Nifti format has rapidly replaced Analyze in neuroimaging research, being adopted as the default format by some of the most widespread public-domain software packages, such as FSL [12], SPM [13], and AFNI [14]. The format is supported by many viewers and image analysis packages like 3D Slicer [15], ImageJ [16], and OsiriX, as well as other emerging software like R [17] and Nibabel [18], besides various conversion utilities.

    An updated version of the standard, the Nifti-2, developed to manage larger data sets, was defined in 2011. This new version encodes each dimension of an image matrix with a 64-bit integer instead of a 16-bit integer as in the Nifti-1, eliminating the size limit of 32,767. This updated version maintains almost all the characteristics of the Nifti-1 but, as it reserves double precision for some header fields, comes with a header of 544 bytes [19].

    Use this

    doi:10.1007/s10278-013-9657-9

    http://elcvia.cvc.uab.es/article/view/680/pdf_4
    http://www.vision.ee.ethz.ch/~cvlsegmentation/driu/downloads.html
    https://www.kaggle.com/c/diabetic-retinopathy-detection/data

  • NIFTI .nii

    Python: http://nipy.org/nibabel/

    http://slideplayer.com/slide/4703517/
    https://itk.org/

    https://fiji.sc/

    https://imagej.nih.gov/ij/plugins/nifti.html

    This project aims to offer easy access to Deep Learning for segmentation of structures of interest in biomedical 3D scans. It is a system that allows the easy creation of a 3D Convolutional Neural Network, which can be trained to detect and segment structures if corresponding ground truth labels are provided for training. The system processes NIFTI images, making its use straightforward for many biomedical tasks. https://github.com/Kamnitsask/deepmedic

    http://www.zeiss.com/meditec/en_de/products---solutions/ophthalmology-optometry/forum-guiding-your-decisions.html
    http://dx.doi.org/10.1371/journal.pone.0082922
    http://dx.doi.org/10.1016/j.ophtha.2009.10.029
    http://dx.doi.org/10.1371/journal.pone.0034823
    http://dicom.nema.org/standard.html
    http://dx.doi.org/10.1016/j.exer.2010.10.009
    https://www5.cs.fau.de/research/software/octseg/
    http://www.nidek-intl.com/product/ophthaloptom/diagnostic/dia_retina/rs-330.html
    http://www.moptim.com/en/
    http://www.optos.com/en-GB/Ultra-widefield-imaging-products/
    http://www.optovue.com/company/
    http://www.topconmedical.com/
    https://www.heidelbergengineering.com/us/products/spectralis-models/

  • MEDICAL File transfer

    vigilantmedical.net

    nuance.com

    http://www.dicomgrid.com/product/share

    http://www.intelemage.com

    businesswire.com

    https://github.com/openeyes/OpenEyes
    https://github.com/openeyes/OpenEyes/wiki/OpenEyes-Installation

  • Eye care Cloud

    https://eyenetra.com/product-insight.html

    https://bitbucket.org/uocte/uocte/wiki/Home
    https://bitbucket.org/uocte/uocte/wiki/browse/
    https://bitbucket.org/uocte/uocte/wiki/UOCTML%20File%20Format
    https://bitbucket.org/uocte/uocte/wiki/Eyetec%20File%20Format
    https://bitbucket.org/uocte/uocte/wiki/Heidelberg%20File%20Format
    https://bitbucket.org/uocte/uocte/wiki/NIDEK%20File%20Format
    https://bitbucket.org/uocte/uocte/wiki/Topcon%20File%20Format
    https://bitbucket.org/uocte/uocte/wiki/Zeiss%20File%20Format

  • Labels and needed Data Quantity

    https://brainder.org/2012/09/23/the-nifti-file-format/
    http://nipy.org/nibabel/gettingstarted.html
    https://www.mathworks.com/matlabcentral/fileexchange/42997-dicom-to-nifti-converter--nifti-tool-and-viewer
    http://people.cas.sc.edu/rorden/dicom/index.html
    http://dicom.nema.org/
    http://www.mayo.edu/bir/PDF/ANALYZE75.pdf
    http://www.fil.ion.ucl.ac.uk/spm/distrib.html
    http://dx.doi.org/10.1007/s10278-013-9657-9
    http://imaging.mrc-cbu.cam.ac.uk/imaging/FormatAnalyze

  • Number of images needed? There is a rule-of-thumb (#1) stating that one should have 10× as many samples as parameters in the network (for a more formal approach, see VC dimension). For example, the ResNet (He et al. 2015) in the ILSVRC2015 challenge had around 1.7M parameters, thus requiring 17M images with this rule-of-thumb.

    Zagoruyko et al. (2016)

    http://nipy.org/nibabel/
    http://slideplayer.com/slide/4703517/
    https://itk.org/
    https://fiji.sc/
    https://imagej.nih.gov/ij/plugins/nifti.html
    https://github.com/Kamnitsask/deepmedic

  • 17 million images? Not necessarily

    Synthetically increase the number of training samples by distorting them in ways expected from the dataset (random xy-shifts, left-right flips, added Gaussian noise, blur, etc.)

    For example, Krizhevsky et al. (2012) from UToronto, who pushed deep learning into the mainstream, increased their training set (1.2M images from ImageNet) by a factor of 2,048 with image translations. Furthermore, they applied RGB intensity alterations with an unspecified factor.

    This has been shown to reduce overfitting.

    We would still need 8,300 images (17M/2,048) with the same augmentation scheme
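The arithmetic above, spelled out (a back-of-the-envelope sketch, not a formal sample-complexity bound):

```python
# Rule-of-thumb #1: ~10 labelled samples per trainable parameter.
params = 1_700_000                  # ~1.7M parameters quoted for ResNet above
samples_needed = 10 * params        # 17,000,000 images before augmentation

augmentation_factor = 2048          # Krizhevsky-style translations + flips
real_images_needed = samples_needed // augmentation_factor   # ~8,300 real images
```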

    DATA AUGMENTATION

    Images from: ftp://ftp.dca.fee.unicamp.br/pub/docs/vonzuben/ia353_1s15/topico10_IA353_1s2015.pdf | Wu et al. (2015)

    Köhler et al. (2013)
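A minimal NumPy sketch of the augmentations listed above (random left-right flips, small xy-shifts, additive Gaussian noise); this is illustrative and not Krizhevsky et al.'s exact scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """Return one randomly distorted copy of a 2D image."""
    out = img
    if rng.random() < 0.5:                         # random left-right flip
        out = out[:, ::-1]
    dy, dx = rng.integers(-4, 5, size=2)           # random xy-shift of up to 4 px
    out = np.roll(out, (dy, dx), axis=(0, 1))
    return out + rng.normal(0.0, 0.01, out.shape)  # additive Gaussian noise

base = np.linspace(0, 1, 32 * 32).reshape(32, 32)    # dummy 'retinal image'
batch = np.stack([augment(base) for _ in range(8)])  # 8 synthetic variants
```

For real fundus or OCT images one would keep the distortions within what the imaging physics can actually produce (e.g. no up-down flips if the anatomy makes them implausible).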

    http://vigilantmedical.net/#home
    http://www.nuance.com/products/powershare-medical-image-exchange/index.htm
    http://www.dicomgrid.com/product/share
    http://www.intelemage.com/
    http://www.businesswire.com/news/home/20160404005170/en/Medidata-Acquires-Intelemage%C2%AE-Global-Leader-Managing-Transferring
    http://dx.doi.org/10.1016/j.cmpb.2015.05.010

  • ~8,300 retinal images? ImageNet-based transfer learning for medical analysis. Tajbakhsh et al. (2016) used the 'original' pre-trained AlexNet (in Caffe) by Krizhevsky et al. (2012), with 60M parameters, and fine-tuned it for medical image analysis.

    Very modest-sized datasets outperformed the hand-crafted methods that they selected.

    [65] N. Tajbakhsh, Automatic assessment of image informativeness in colonoscopy, Discrete Cosine Transform-based feature engineering

    [60] J. Liang and J. Bi, Computer aided detection of pulmonary embolism with tobogganing and multiple instance classification in CT pulmonary angiography. A set of 116 descriptive properties, called features, is computed for each candidate.

    Database consisting of 121 CT pulmonary angiography (CTPA) datasets with a total of 326 pulmonary embolisms (PEs)

    6 complete colonoscopy videos. 40,000 frames

    https://eyenetra.com/product-insight.html

  • Noisy labels: Now we have a 'circular' problem where our diagnosis labels come from human experts who we know do a suboptimal job.

    If a human expert reaches AUC ROC = 0.8, and we get an AUC of 1.0, what would that mean in practice?

    Unlike in ImageNet, where correct dog breeds are relatively easy to get right with proper dog experts, the 'real pathology' becomes more ambiguous.
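A quick simulation of this 'circular' problem (toy numbers, assuming a 10% expert error rate): even a classifier that perfectly recovers the true pathology is measured as imperfect against noisy expert labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
truth = rng.integers(0, 2, n)               # unobservable true pathology
score = truth + rng.normal(0, 1e-3, n)      # a 'perfect' classifier's score
flip = rng.random(n) < 0.10                 # expert mislabels 10% of cases
expert = np.where(flip, 1 - truth, truth)

def auc(labels, scores):
    """Rank-based (Mann-Whitney) AUC, assuming no tied scores."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc_vs_truth = auc(truth, score)     # essentially 1.0
auc_vs_expert = auc(expert, score)   # capped well below 1.0 by label noise alone
```

So a measured AUC near the expert's own agreement level may already indicate a near-perfect model, not a mediocre one.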

    "Considering the recent success of deep learning (Krizhevsky et al., 2012; Taigman et al., 2014; Sermanet et al., 2014), there is relatively little work on their application to noisy data" - Sukhbaatar et al. (2014)

    http://dx.doi.org/10.1109/TNNLS.2013.2292894

    http://arxiv.org/abs/1607.06988

    http://dx.doi.org/10.1177/1062860609354639

  • Gold standard: Beyond typical machine learning. Abstract:

    Despite the accelerating pace of scientific discovery, the current clinical research enterprise does not sufficiently address pressing clinical questions. Given the constraints on clinical trials, for a majority of clinical questions, the only relevant data available to aid in decision making are based on observation and experience. Our purpose here is 3-fold. First, we describe the classic context of medical research guided by Popper's scientific epistemology of falsificationism. Second, we discuss challenges and shortcomings of randomized controlled trials and present the potential of observational studies based on big data. Third, we cover several obstacles related to the use of observational (retrospective) data in clinical studies. We conclude that randomized controlled trials are not at risk for extinction, but innovations in statistics, machine learning, and big data analytics may generate a completely new ecosystem for exploration and validation.

    http://dx.doi.org/10.2196%2Fjmir.5549

    http://dx.doi.org/10.1590%2F2176-9451.19.5.027-030.ebo

    http://dx.doi.org/10.3310/hta11500

    http://dx.doi.org/10.1197/jamia.M1733

    Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the κ (kappa) statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts, and that κ approaches these measures as the number of negative cases grows large. Positive specific agreement (or the equivalent F-measure) may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies.
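The equivalence claimed above is easy to verify numerically: treating one expert's positives as the reference, the F-measure between two experts equals 2a/(2a+b+c), the positive specific agreement. The item IDs below are made up for illustration:

```python
# Two 'experts' mark items as positive; no well-defined negative set is needed
# for positive specific agreement, mirroring the argument above.
expert_a = {1, 2, 3, 5, 8}        # items marked positive by expert A (hypothetical)
expert_b = {2, 3, 5, 7, 9, 10}    # items marked positive by expert B (hypothetical)

both = len(expert_a & expert_b)   # a: agreed positives
only_a = len(expert_a - expert_b) # b: marked only by A
only_b = len(expert_b - expert_a) # c: marked only by B

# F-measure, treating expert B as the reference standard
precision = both / len(expert_a)
recall = both / len(expert_b)
f_measure = 2 * precision * recall / (precision + recall)

# Positive specific agreement: 2a / (2a + b + c)
psa = 2 * both / (2 * both + only_a + only_b)
```

Both expressions reduce algebraically to 2a/(2a+b+c), which is why the two numbers coincide for any pair of raters.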

    https://lazyprogrammer.me/deep-learning-tutorial-part-23-artificial-neural/
    https://www.quora.com/What-is-the-VC-dimension-of-a-Neural-Network
    http://arxiv.org/abs/1502.01852
    http://arxiv.org/abs/1605.07146

  • Crowdsourcing labels #1

    http://dx.doi.org/10.1109/TMI.2016.2528120

    http://papers.nips.cc/paper/4824-imagenet-classification-w
    ftp://ftp.dca.fee.unicamp.br/pub/docs/vonzuben/ia353_1s15/topico10_IA353_1s2015.pdf
    https://arxiv.org/abs/1501.02876
    http://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2013/Koehler13-ANQ.pdf

  • Crowdsourcing labels #2 Gamify the segmentation process for electron microscopy

    https://www.youtube.com/watch?v=c43jVfpzvZ0

    https://www.youtube.com/watch?v=8L_ATqjfjbY
    https://www.youtube.com/watch?v=bwcuhbj2rSI

    EyeWire, http://eyewire.org/explore

    http://dx.doi.org/10.1109/TMI.2016.2535302
    https://github.com/BVLC/caffe/wiki/Borrowing-Weights-from-a-Pretrained-Network
    https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet
    http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf
    http://dx.doi.org/10.1007/978-3-319-13692-9_14
    http://dx.doi.org/10.1007/978-3-540-73273-0_52

  • Active Learning

    http://arxiv.org/abs/1406.2080
    http://dx.doi.org/10.1109/TNNLS.2013.2292894
    http://arxiv.org/abs/1607.06988
    http://dx.doi.org/10.1177/1062860609354639
    http://dx.doi.org/10.1109/ICASSP.2016.7472164
    http://dx.doi.org/10.1016/j.patcog.2015.09.020
    http://www.cv-foundation.org/openacc