
Image Analysis

  • Duration: 29:26

Transcript

00:00:11.15 So I'm Kurt Thorn and I'm going to talk to you now
00:00:14.03 about part two of our lecture on digital image analysis.
00:00:17.23 And specifically, I'm going to cover now some basic image
00:00:21.00 analysis techniques and algorithms and tools that go into just
00:00:25.01 about any kind of image processing you're likely to do.
00:00:27.24 So just to refresh your memory here, we talked about this in part
00:00:32.09 one of this lecture, but a digital image is nothing more than many
00:00:35.25 measurements of light intensity in your sample. And so,
00:00:39.06 you have this image, it's got many points in it, you can zoom
00:00:41.26 in and see the individual pixels that make this up. And
00:00:45.01 each pixel here is represented just by a digital number
00:00:47.14 that tells you how bright it is. And in this image here, they go from 0
00:00:50.25 to 255. And so most digital image analysis works on these
00:00:56.04 underlying numbers to do mathematical calculations that will
00:00:59.20 enhance or suppress certain features in your image, or
00:01:04.03 extract information from it for you to analyze.
00:01:07.24 So I first want to talk a little bit about how you would normally
00:01:12.07 correct an image for common microscope artifacts.
00:01:16.07 Artifacts almost isn't the right term, but just things that go
00:01:20.14 into your image that make it not quite as linear and as
00:01:24.05 uniform as you'd like it to be. And the first one is background
00:01:26.27 correction. And any image you take will have background
00:01:31.12 from various sources. Some of it is truly mathematical
00:01:35.08 in a sense, because your camera has a non-zero offset.
00:01:37.14 So your camera doesn't read zero if no light goes through it,
00:01:39.21 it always reads some finite positive number, it's usually
00:01:42.24 a few hundred or a thousand. And so you'll never see
00:01:45.24 intensities below that in your image. But then you can have
00:01:49.04 real background, physical background, which comes from
00:01:51.10 media autofluorescence, background fluorescence from your
00:01:55.13 sample, stray light in your room, and the fact that your room
00:01:58.12 isn't totally dark and you're picking up room lights in your
00:02:00.08 image. And ideally if you're doing an image analysis,
00:02:05.06 particularly if you're doing a quantitative analysis, you'd like
00:02:07.27 zero to be meaningful in the sense that you'd like
00:02:10.06 zero to reflect what you see when there's no real
00:02:12.23 fluorescence from your sample. So that if there's
00:02:16.10 no fluorescent protein or antibody staining, you'd
00:02:19.06 read zero and not 200 or 1000 or whatever.
00:02:22.14 And so generally, the way you deal with this is
00:02:26.01 background subtraction. You try and measure or estimate
00:02:29.07 your background in your image somehow, and then you subtract
00:02:33.00 that off. One way to do this is taking a camera dark image,
00:02:37.09 so take an image where no light is coming into your camera.
00:02:40.06 And then subtract that off. Or you can take an image
00:02:44.24 of let's say unstained sample or a mock-stained sample
00:02:48.05 that doesn't have any dye in it, and estimate what the
00:02:50.13 background intensity is there and subtract that off.
00:02:52.23 And then there's also approaches where you can estimate
00:02:54.25 the background from the image, and that's what I'll mention first.
00:02:57.16 So here's an image of a cell, well there's actually one bright
00:03:01.02 cell here, and a sort of dim cell over there. And another
00:03:03.19 dim cell over here. And if we plot the pixel intensity
00:03:07.07 histogram of this image. So here we're just plotting on
00:03:10.07 the y-axis the number of pixels with a given intensity,
00:03:13.09 and on the x-axis that intensity. You can see the intensity here goes
00:03:16.25 up to about 10 or 30000. And this image here, you can see
00:03:26.24 is mostly background, it's mostly this dark background.
00:03:29.13 And so this histogram is dominated by this huge peak
00:03:32.01 here, which represents all the background pixels.
00:03:34.14 And then the cells are this sort of tail end of this
00:03:37.04 histogram, this little bump right down here. But
00:03:40.18 since most of this image is background, it's pretty
00:03:42.09 easy to figure out what that background value is.
00:03:44.15 And if we zoom in on that peak there, you can see that
00:03:47.18 the peak here is centered around 1200 or so. And so
00:03:51.19 we could work out, say, the modal value of this image
00:03:54.19 or we could fit a Gaussian to just this peak, or look at
00:03:58.29 the median or some other statistical measure which will
00:04:02.02 basically tell us what that background value is. And then
00:04:04.29 we could subtract that off from every pixel in our image,
00:04:07.15 and then we would measure roughly zero for where
00:04:11.01 we don't have any real biological fluorescence. And so this is
00:04:16.04 a commonly used approach to estimate background
00:04:18.26 from images. You can do more sophisticated things
00:04:21.09 where you look in local windows, so you don't use a single
00:04:25.03 background for the whole image, but you try and
00:04:27.04 estimate what the background is in different regions
00:04:29.17 independently, in case your background's not uniform.
00:04:31.24 But if you have good images with a pretty uniform background,
00:04:35.23 then this very simple method works quite well.
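
A minimal sketch of this histogram-based estimate in Python with NumPy (the function name and the synthetic test image are illustrative, not from the talk):

```python
import numpy as np

def estimate_background(img, nbins=1024):
    """Estimate a single background value as the modal (most common)
    intensity, assuming background pixels dominate the image."""
    counts, edges = np.histogram(img, bins=nbins)
    peak = np.argmax(counts)                      # bin holding the most pixels
    return 0.5 * (edges[peak] + edges[peak + 1])  # center of that bin

# Hypothetical example: a mostly-dark image with background around 1200.
rng = np.random.default_rng(0)
img = rng.normal(1200, 50, size=(512, 512))   # background pixels
img[200:230, 200:230] += 20000                # one bright "cell"

bg = estimate_background(img)
corrected = img.astype(np.float64) - bg       # subtract in floating point
```

When most of the field is background, np.median(img) gives a very similar answer with less code.
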
00:04:38.13 As I mentioned, you can also acquire a dark image
00:04:42.01 and this is actually very useful because it tells you something
00:04:44.26 about your camera and your instrument background.
00:04:47.04 And this you do by just blocking light going to the camera,
00:04:50.03 you close a shutter, or you even take the camera off
00:04:53.14 and cap it. And then you just take an image. And this
00:04:58.00 allows you to measure both the camera offset and
00:05:00.19 dark current and read noise. And then it also gives you
00:05:04.04 an estimate of what your real minimum background
00:05:06.19 is, your true background when no light is arriving at the
00:05:09.13 camera. And this can be very handy because it lets you
00:05:11.22 figure out what's real and what's autofluorescence.
00:05:14.04 So if you see your image and you measure a value of
00:05:17.02 1000 in the background, it's hard to know if that's because
00:05:20.28 I have bad autofluorescence or because my camera
00:05:24.01 just has that offset. So if you take your camera off and you
00:05:27.06 measure an offset of 200, then you know, well gee,
00:05:30.11 there's 800 digital numbers here above that offset,
00:05:33.12 that must be coming from autofluorescence or stray light or
00:05:36.13 other things that maybe I can get rid of. And this is what a
00:05:40.16 dark image looks like. This is what a good dark image
00:05:43.13 looks like. If you have a cheap camera or one that has
00:05:45.24 problems, you'll often see that it's not so uniform.
00:05:47.23 But an ideal dark image looks sort of like this, it's just
00:05:50.26 this featureless gray image with a little bit of noise on it.
00:05:55.22 And so you can then subtract this off from your images and that
00:05:58.22 would correct for this camera offset. In general if you were going to
00:06:03.01 do that, you'd want to acquire a number of these and then
00:06:04.21 average them to minimize the noise in there.
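
A sketch of that dark-frame averaging, assuming the frames have already been loaded as 2D arrays (the synthetic frames here are stand-ins for real camera data):

```python
import numpy as np

# Stand-ins for N dark exposures: a camera offset of ~200 counts plus read noise.
rng = np.random.default_rng(0)
dark_frames = [rng.normal(200, 5, size=(512, 512)) for _ in range(10)]

# Averaging in floating point reduces the noise by roughly sqrt(N).
master_dark = np.mean(np.stack(dark_frames), axis=0)
```
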
00:06:07.04 Another very common source of error in your image
00:06:12.05 that comes from microscope aberrations is non-uniform illumination,
00:06:16.22 which you fix with what's called a shading correction. And this
00:06:20.23 just comes from the fact that it's very hard to get
00:06:22.08 truly uniform illumination over your sample, and so
00:06:25.19 you may see that pixels in the center of your sample are brighter
00:06:28.16 than pixels at the edge. And in general, you'd like to correct
00:06:33.04 for this too. You don't want to bias your results by measuring
00:06:36.08 brighter cells in the center of the field of view, and so
00:06:39.01 if you have some samples that have a lot of cells in the middle,
00:06:41.08 they'll read as brighter than some samples that have a lot of cells
00:06:44.05 at the edge of the field of view. And correcting for this
00:06:47.25 is also quite simple. You just image a uniform fluorescent
00:06:50.27 sample. And commonly what we use are just pieces
00:06:54.14 of fluorescent plastic cut to microscope-slide sizes.
00:06:57.19 But you can also use uniform solutions of fluorescent
00:07:00.14 dye, suspended between a coverslip and a slide.
00:07:03.12 Or any other sample that's fluorescent and will be
00:07:07.04 very uniformly distributed. And so if you take a picture of one
00:07:11.14 of these samples, you usually see something like this.
00:07:14.05 Generally it's brightest in the center and it tails off as you
00:07:16.13 get to the edges. Sometimes the brightness is not centered,
00:07:21.19 the bright spot is at the corner or whatever. But it looks something
00:07:25.04 like this, and usually the variation in intensity is around
00:07:27.17 20 or 30% over the field of view. And so you can divide
00:07:31.11 your image by this to correct for the non-uniformity
00:07:33.26 in the illumination. There are also generally non-uniformities
00:07:37.14 in the detection, and that's sort of all rolled together here, but
00:07:39.19 for most purposes, that's an okay assumption to make.
00:07:42.21 And so if you put these both together, this is how you would
00:07:47.05 do that analysis. So you would measure some image here that's
00:07:50.08 this I_meas, and it's equal to your true image in the absence
00:07:54.10 of these problems. The true image times the shading
00:07:58.05 correction, so your true image is multiplied by whatever this
00:08:03.24 non-uniformity is, and then you add to that this dark image.
00:08:06.18 And so just doing some algebra, you can get out that your
00:08:10.20 true image is equal to your measured image minus
00:08:13.02 the dark image, divided by the shading image.
00:08:16.20 And in general, this is something you probably want to
00:08:21.01 do on your images if you're trying to do very quantitative
00:08:23.12 imaging, or if you're doing things like image stitching or
00:08:26.29 tiled images, where you want to make sure that they don't
00:08:30.16 have any checkerboarding and that they have a very uniform
00:08:32.23 intensity across the field of view. In general, this is a good
00:08:37.01 procedure to do. And almost any software package should
00:08:40.06 be able to do this kind of simple image math where you're
00:08:42.19 subtracting one image from another image, or dividing
00:08:46.00 one image by another image. You do need to pay a little
00:08:49.20 attention here, in general you'd like to normalize your
00:08:52.10 shading image such that its mean is 1, so that you don't change
00:08:56.22 the intensities when you do this division. And you also
00:09:00.14 therefore need to make sure you use what's called a
00:09:03.22 floating point format to store this image. That you don't store
00:09:08.03 it as just integer numbers, 0 to 255, or 0 to 65000, or whatever.
00:09:13.09 Because then normalizing this to 1 would just make it all
00:09:15.14 zeros or ones. You want to have floating points, which means it has
00:09:19.26 decimal places so that the value can be 0.99 or 1.02, or
00:09:24.24 whatever. But pretty much any software package will let you do that.
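
A sketch of the full correction, I_true = (I_meas - dark) / shading, assuming raw, dark, and flat are same-shaped arrays (the names are illustrative); note the floating-point math and the mean-1 normalization described above:

```python
import numpy as np

def correct_image(raw, dark, flat):
    """Flat-field correction: I_true = (I_meas - dark) / shading."""
    raw = raw.astype(np.float64)
    dark = dark.astype(np.float64)
    # Offset-subtracting the flat as well is a common refinement,
    # not spelled out in the talk.
    shading = flat.astype(np.float64) - dark
    shading /= shading.mean()      # mean 1.0, so overall intensities are preserved
    return (raw - dark) / shading
```
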
00:09:29.08 So that's the first thing I wanted to talk about, just these
00:09:33.24 basic image corrections. And now I want to get more into
00:09:37.05 image analysis or image processing, where you're applying
00:09:42.19 manipulations to try and enhance or suppress certain
00:09:45.27 features. And an extremely common and very powerful
00:09:49.13 technique is what's called digital image filtering.
00:09:52.08 And the idea here is you have what's called a kernel,
00:09:55.13 so this little matrix of numbers here. And at every
00:09:59.10 pixel in your image, you go and apply this kernel
00:10:01.17 and calculate some product between the kernel and
00:10:05.22 that portion of the image, and then replace that pixel
00:10:09.04 in the image with that product. And so the way this works is
00:10:12.16 here we've got a little 3x3 square here. And the idea is if we want
00:10:16.23 to apply this to a single pixel in the image, we'd center this
00:10:21.25 3x3 box on that pixel in the image and then go around and
00:10:25.29 take the pixels around it and multiply them by these numbers,
00:10:28.13 add them all together, and then replace the central
00:10:30.24 pixel with that value. And so what this particular kernel does
00:10:34.10 is it does a smoothing, or an averaging filter. And so
00:10:38.21 it takes this 3x3 neighborhood and replaces each pixel
00:10:41.19 with the average of its neighboring pixels. You can also
00:10:45.22 do that with what's called a Gaussian smoothing kernel.
00:10:48.09 Where instead of being just a uniform set of values over
00:10:52.09 the kernel size, here we weight the center more highly,
00:10:55.24 we weight these in a Gaussian fashion, so we weight the center
00:10:59.00 highly and then roll off towards the edges. And how this
00:11:03.07 works is, here's say our little 5x5 image we want to filter.
00:11:08.24 So, we take our 3x3 kernel here, we then drop it on
00:11:14.02 this 8 here. And so now we go through one by one,
00:11:17.12 and we say okay, here's 10x1 is 10, 11x1 is 11, and so on.
00:11:22.02 And we multiply our way through and we add these all up, and
00:11:25.28 eventually you get a value of 14 out of that. And so you would
00:11:29.24 go in here and then replace that original 8 with 14, because
00:11:33.25 that's the average of these 9 pixels. Here's another example
00:11:39.05 of this, where we've got a little more complicated image.
00:11:41.15 So we've got this image here, and we want to use this
00:11:43.23 simple 3x3 smoothing filter on it. And here's what the
00:11:47.15 digital values in that image are. You can see these
00:11:51.06 values here that are 113, which correspond to this gray line
00:11:54.00 here. And then these values that are 255, which correspond to the
00:11:57.01 white diagonal line there. And we're going to put the filtered values
00:12:01.17 over here. So the idea is that we take the first corner
00:12:05.26 of this image, drop that smoothing matrix on, and then
00:12:09.12 multiply and average those guys. And that gives us a value
00:12:13.18 here. Do it one over for the next value, and so on, and so
00:12:18.05 forth. And we do it for the whole image, we get this.
00:12:22.11 And that looks like this. And so you can see we've averaged
00:12:24.28 out and blurred these structures here. One thing
00:12:29.05 if you were looking at this closely, what you may wonder about is
00:12:31.21 how do you deal with these edges? We're applying this
00:12:35.09 3x3 kernel here and so if we want to filter something
00:12:37.19 in this top edge here, we need to figure out what the numbers
00:12:40.08 up here, one row above that edge should be. And obviously there
00:12:44.16 aren't any numbers there. And so there's a number of
00:12:47.04 different approaches that people commonly use to deal with this.
00:12:49.14 The simplest is probably just to assume that any numbers
00:12:53.18 outside of your original image are zero. But equally,
00:12:57.16 you can generate those numbers by taking this top row
00:13:01.02 and duplicating it, or wrapping around so that the top row
00:13:04.05 is connected to the bottom row. And then you would get
00:13:06.22 your numbers from there, and I think that's what was done here.
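
A sketch of this kernel filtering with SciPy, on a 5x5 toy image loosely modeled on the example above; the mode argument selects among the border strategies just described (assume zeros outside, duplicate the edge row, or wrap around):

```python
import numpy as np
from scipy import ndimage

kernel = np.full((3, 3), 1.0 / 9.0)   # 3x3 smoothing (averaging) kernel

img = np.array([[10., 11.,  9., 10., 11.],
                [11.,  8., 10., 11., 10.],
                [ 9., 10., 11.,  9., 10.],
                [10., 11., 10., 10.,  9.],
                [11.,  9., 10., 11., 10.]])

smooth_zeros = ndimage.convolve(img, kernel, mode='constant', cval=0.0)  # zeros outside
smooth_edge  = ndimage.convolve(img, kernel, mode='nearest')             # duplicate edges
smooth_wrap  = ndimage.convolve(img, kernel, mode='wrap')                # wrap top to bottom
```
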
00:13:09.20 The net effect of this kind of kernel is to do a smoothing
00:13:15.12 or an averaging of your image. And as I said before, you can also
00:13:19.08 use a Gaussian kernel, which looks like this. And if you
00:13:22.15 look at what it looks like, it looks like this. And so it's
00:13:24.21 basically what you'd expect, it's bright in the center and then
00:13:27.08 it falls off smoothly as you move out from the center.
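
In practice you would rarely build the Gaussian kernel by hand; a sketch with scipy.ndimage, where the sigma value is an illustrative choice:

```python
import numpy as np
from scipy import ndimage

# Stand-in for a real image: Poisson (shot-noise-like) counts.
img = np.random.default_rng(0).poisson(100, size=(256, 256)).astype(np.float64)

# sigma ~1 pixel is an illustrative starting point; for the PSF-matched
# smoothing discussed next, pick sigma comparable to the width of your
# point spread function in pixels.
blurred = ndimage.gaussian_filter(img, sigma=1.0)
```
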
00:13:29.27 And that would do a similar kind of blurring operation, except
00:13:32.27 it would be more weighted just towards the pixel in the
00:13:36.10 center and its nearby pixels. So, why do you want to
00:13:41.27 do this? You know, there are many reasons why you might
00:13:45.07 want to generate a blurry image of your cells, just to
00:13:49.14 smooth out small artifacts or to estimate what the
00:13:55.00 mean brightness of something is, not dominated by
00:13:57.12 small things. But one very interesting reason is
00:14:01.11 because if your image is sampled appropriately at what's called
00:14:04.07 the Nyquist limit, and the camera lecture talks about this,
00:14:07.20 the point spread function of your microscope, so the minimum
00:14:11.19 resolvable element of your microscope, would be spread out
00:14:14.08 over many pixels. And that means that the information
00:14:18.10 in adjacent pixels is correlated with each other, and they're not
00:14:22.10 completely independent. And there's some redundancy there.
00:14:25.01 And if you use deconvolution, you'd take advantage of that
00:14:28.23 redundancy and then sort of mathematically correct
00:14:30.24 that to sort of reconstruct a more accurate image by
00:14:36.07 using that redundancy to infer what the underlying data
00:14:39.27 that gave rise to your image was. But even if you're not
00:14:43.10 doing deconvolution, you can exploit some of that
00:14:46.04 redundancy by smoothing with a Gaussian kernel, that's sort of the
00:14:49.14 same size as your point spread function. And part of why
00:14:53.26 that helps is because since there is redundancy in your image,
00:14:56.16 if you have a single pixel that's really bright and everything
00:14:59.06 around it is dim, you know that can't be real because
00:15:02.12 any real bright object in your sample would be spread out
00:15:04.22 over multiple pixels. And so a single bright pixel like that
00:15:08.06 has to come from some kind of noise artifact. And so
00:15:11.14 smoothing will help suppress those artifacts and give you
00:15:14.11 a better representation of what your underlying image
00:15:16.17 looks like. And so here's an actual PSF. So here's a
00:15:21.09 point spread function measured from a microscope, and here's that
00:15:24.17 Gaussian kernel again. And you can see that they do look
00:15:26.14 fairly similar, and in fact, if we took a little bigger Gaussian
00:15:28.20 it would be an even better match to this. And so the idea
00:15:31.29 is that this is a good approximation to the underlying
00:15:35.22 distribution that our microscope generates, and so smoothing
00:15:37.25 like this would remove any sort of nonsensical hot
00:15:41.15 single pixels or bright single pixels that couldn't have come
00:15:43.23 from a real object, which would instead generate a bright
00:15:46.18 image like that. And so here's a sort of artificial example
00:15:51.23 where we've taken an image, a noisy image, this was actually
00:15:57.10 a computer generated noisy image, but the principle is the same
00:16:00.10 for real noisy images. And you can see there's structure in
00:16:03.16 here, but it's kind of hard to see it. And when we do this
00:16:06.14 Gaussian smoothing filter, you can see these structures are now
00:16:09.00 easier to see because we've averaged out some of the
00:16:11.11 noise. But we haven't averaged out the real structures by
00:16:14.18 as much, because they're about the same as our averaging
00:16:17.00 filter. Here's a bigger example, these are real microscopy
00:16:21.18 images. It's a little harder to see here, but you can see
00:16:24.12 here's the original image and then on the bottom
00:16:26.16 here, we've filtered it with a Gaussian filter. And again,
00:16:30.08 it's a little bit easier to see structures in there, particularly
00:16:32.27 if you zoom in on it. However, these kernels can be used
00:16:38.21 for many different kinds of operations, not just smoothing.
00:16:40.26 Another common one is edge detection. And so you can sort
00:16:45.20 of see what we're doing here if you just look at these filters.
00:16:48.04 So there are two different filters here, the one on the top
00:16:51.07 and one on the bottom. They do essentially the same thing,
00:16:53.25 they take the difference between one row, so this top row
00:16:57.23 here, and the row two away from it. And so, if there is
00:17:02.16 a bright thing in this row and a dim thing in this row, we'll get
00:17:06.12 a bright output. But if these two are about the same, you'll get
00:17:10.23 a zero output. And here we're doing the same thing, but we're just now
00:17:14.23 biasing it a little bit to be more centered around the pixel,
00:17:21.09 taking more information from the pixels vertically in line with the
00:17:24.08 center there, and not the ones to the left and the right.
00:17:26.06 And so what these things are going to do is enhance
00:17:29.10 edges that run horizontally through our image. And in fact,
00:17:35.07 if we apply that to this image here. If we apply the top
00:17:37.08 filter to this image, here's what you get. You see that
00:17:39.25 these actin filaments that are running roughly horizontal
00:17:42.08 here jump right out, the ones that are running vertical are totally
00:17:45.26 suppressed. And the ones that are running at a diagonal
00:17:49.14 sort of show up somewhat. You can equally well rotate
00:17:53.07 each of these filters 90 degrees and get a set of vertical
00:17:55.12 edge detection filters. And if you apply a vertical filter to the
00:18:00.02 image and add those two images together, you get
00:18:05.25 kind of a detection of all edges, regardless of orientation
00:18:09.02 in the image. So this can be very handy for finding boundaries
00:18:13.05 of things. Any type of sharp edge you want to bring out in your image.
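
A sketch of this using the second, center-weighted kernel pair (the classic Sobel filters); summing the absolute horizontal and vertical responses, as described above, brings out edges of any orientation:

```python
import numpy as np
from scipy import ndimage

# Horizontal-edge kernel with the center column weighted more heavily,
# and its 90-degree rotation for vertical edges.
k_horiz = np.array([[ 1.,  2.,  1.],
                    [ 0.,  0.,  0.],
                    [-1., -2., -1.]])
k_vert = k_horiz.T

img = np.random.default_rng(0).random((256, 256))  # stand-in for a real image

edges_h = ndimage.convolve(img, k_horiz)        # horizontal edges pop out
edges_v = ndimage.convolve(img, k_vert)         # vertical edges pop out
edges_all = np.abs(edges_h) + np.abs(edges_v)   # edges of any orientation
```
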
00:18:19.02 And then there are similar sorts of filters that are edge
00:18:23.01 detectors that instead of looking for edges, you're now looking
00:18:26.14 for just bright objects surrounded by dim objects. And these are
00:18:29.29 contrast enhancement filters of various sorts, you can see
00:18:32.26 here's one here, where we're taking a very large value times the
00:18:35.15 center pixel, and then subtracting off a small amount
00:18:38.23 of information from all the neighboring stuff. And there's a number of
00:18:42.20 different varieties of these. They go by different names and have
00:18:45.20 slightly different shapes, but they all have a general principle of a
00:18:48.22 bright central object and then subtracting off its neighboring
00:18:51.01 pixels. So these are called things like unsharp masking, or
00:18:54.03 unsharp filters. That's a sort of old technical photography term.
00:18:59.06 Laplacian filters or Laplacian of Gaussian filters. They all do basically
00:19:04.16 the same thing, they differ in the details, but basically all do this contrast
00:19:08.10 enhancement. And you can see that if we apply that to this image again,
00:19:10.29 that really makes these actin filaments pop out. But at the same time,
00:19:14.21 it also highlights sort of this speckly noise and stuff in this image.
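
A sketch of unsharp masking along these lines: blur the image, treat the difference from the original as detail, and add a scaled copy of that detail back (the sigma and amount values are illustrative tuning knobs):

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Enhance contrast by adding back the detail a Gaussian blur removes."""
    img = img.astype(np.float64)
    detail = img - ndimage.gaussian_filter(img, sigma)  # edges and speckle
    return img + amount * detail   # note: noise gets amplified along with signal

img = np.random.default_rng(0).poisson(50, size=(256, 256)).astype(np.float64)
sharpened = unsharp_mask(img, sigma=2.0, amount=1.5)
```
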
00:19:18.03 So these are both good and bad, depending on what you're
00:19:20.24 trying to do. They tend to enhance noise, but they also enhance
00:19:22.27 contrast in your image. You can also use nonlinear filters.
00:19:29.29 So everything we've talked about so far has been a linear
00:19:32.26 filter, meaning the output is a linear function of the input.
00:19:35.21 It's just generated by multiplying pixels by the kernel values and
00:19:39.27 then averaging. But you can also do nonlinear filtering, where you
00:19:43.25 do things like median filtering, where you take a box,
00:19:46.24 a 3x3 box, you pick out the median value of those 9 pixels and
00:19:53.01 replace the central pixel with the median of those 9 pixels.
00:19:56.00 These have different properties than the linear filters.
00:20:01.24 One thing they're good for is smoothing while maintaining
00:20:04.12 edges in your image. They're also very good at removing hot
00:20:07.21 pixels or other single pixel garbage in your image.
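
A sketch of a 3x3 median filter removing a hot pixel (the toy image is illustrative):

```python
import numpy as np
from scipy import ndimage

img = np.full((7, 7), 100.0)
img[3, 3] = 65000.0                            # a single hot pixel

cleaned = ndimage.median_filter(img, size=3)   # 3x3 neighborhood median
# cleaned[3, 3] is back to 100.0: the outlier is replaced by the local
# median, while genuine edges would be largely preserved.
```
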
00:20:12.25 So that's what I wanted to say about filtering, and the next
00:20:17.08 topic I want to talk about is thresholding. And thresholding
00:20:22.26 is an extremely common technique for identifying objects
00:20:25.08 in images. And so the idea here is if you look at this image
00:20:28.13 on the left here, you can see that there's a bunch of nuclei in it.
00:20:31.21 These bright white objects. And then there's a lot of background.
00:20:34.04 And we'd like to automatically separate these nuclei from the
00:20:38.21 background, automatically identify where they are and which
00:20:41.11 pixels are in the nuclei, as opposed to which pixels are in the
00:20:44.20 background. And the idea here is if you look at the pixel
00:20:48.09 intensity histogram, so here's the histogram, this time it's
00:20:51.04 plotted on a log scale so you can see the dimmer information
00:20:53.05 better. Again, like that first pixel intensity histogram
00:20:56.25 I showed you here, it's dominated by the background. There's this
00:21:00.01 huge peak right here of the background data, all those
00:21:03.03 pixels that are black and in the background, and then it very rapidly
00:21:06.10 falls off. And then you see this long tail, which eventually
00:21:09.15 goes away. And that corresponds to the white objects here
00:21:12.29 to the nuclei. And so the idea here behind thresholding is
00:21:16.11 that by just picking an intensity somewhere in this range
00:21:19.28 here in between those two objects, we can separate
00:21:23.03 this into foreground and background, into nuclei and background.
00:21:26.13 So the idea is that we would maybe take everything above
00:21:29.18 this value in this red box here, and now if we just color
00:21:34.12 those pixels red on this image, you see that we've picked out all of our
00:21:37.10 nuclei. And for images like this, where there's very good separation
00:21:42.10 between foreground and background, between the objects
00:21:44.12 we're trying to segment and the background, this can work
00:21:46.27 extremely well. As you can see here, it basically works perfectly.
00:21:51.01 The bigger question then is how do you do this automatically?
00:21:55.25 And so here's a little bit of a tougher case, this is actually not
00:21:59.08 a microscope image, but a dot blot. And you can see
00:22:02.29 there's some really dark objects here and some objects that are
00:22:05.11 clearly there but they're not a whole lot darker than the
00:22:07.06 background. And so setting the cutoff here matters quite a bit.
00:22:12.02 If you set it too high, you miss these dim objects. If you
00:22:15.11 set it too low, you start picking up a lot of background noise.
00:22:17.27 And also, you would like to be able to do this threshold
00:22:22.19 choice automatically, or at least in a sort of unbiased
00:22:25.24 objective way. And so, this is just a screenshot from ImageJ
00:22:30.17 here from their thresholding tool, and you'll see here this auto
00:22:32.16 button here. And if you use that button here, it will automatically
00:22:35.29 pick the threshold. And this is the threshold it picks, you can see
00:22:39.10 it does a pretty good job. It gets most of the dark objects
00:22:41.27 but not quite all of them. And how does this work?
00:22:46.10 So it turns out in ImageJ, there are many, many different
00:22:49.10 algorithms it can use for doing automatic contrast or automatic
00:22:53.05 threshold generation. But the most common one you'll see
00:22:57.05 probably is what's called Otsu's method. And this basically
00:23:01.11 works by assuming your image is made up of background pixels and
00:23:05.25 foreground pixels, and that those are both roughly Gaussian
00:23:10.00 distributed, and it looks for basically the minimum between
00:23:13.17 those two distributions and then sets the threshold there.
00:23:16.22 So here's an example from a paper of just trying to
00:23:20.00 segment the DNA in this mitotic spindle. And there's the DNA
00:23:22.24 and what it looks like, and here's what Otsu's method
00:23:25.03 detects for the appropriate threshold. And this works pretty well.
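
A sketch of automatic thresholding with Otsu's method via scikit-image (the synthetic image is a stand-in for real nuclei data):

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
img = rng.normal(1200, 50, size=(256, 256))   # background
img[60:100, 60:100] += 8000                   # one bright "nucleus"

t = threshold_otsu(img)   # picks the cut between the two intensity classes
mask = img > t            # binary image: True inside objects, False outside
```
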
00:23:30.00 There are many others, but I think the ImageJ threshold
00:23:34.13 tool has something like 15 or 20 threshold algorithms.
00:23:38.06 Some work better on some images than others, basically because
00:23:43.01 of the assumptions that are built into them. There's a lot
00:23:47.07 of refinements on this for doing more sophisticated
00:23:49.24 ways to threshold. Image segmentation after thresholding,
00:23:53.23 where you try and separate objects that are touching,
00:23:56.27 but at the base of all these segmentation algorithms
00:24:00.25 is this idea of thresholding. That you can separate your
00:24:03.20 picture into a foreground intensity and a background
00:24:05.27 intensity in some automated manner. So one problem
00:24:13.12 with this approach I just want to mention, assuming
00:24:13.12 we wanted to quantify nuclear intensity by this thresholding
00:24:16.28 approach, is that it's biased towards brighter objects.
00:24:19.16 And so ideally, if you're trying to quantitate something here,
00:24:24.00 you might miss objects that don't have any fluorescence at all.
00:24:26.27 They might be there, but you just can't see them.
00:24:28.27 Or they're so close to the background that they don't get picked
00:24:31.02 up by your thresholding. And so you would bias your output data.
00:24:34.27 And so ideally you'd like to use a second channel that you're not
00:24:38.08 quantifying, to independently define the objects. And the idea here
00:24:40.28 is you'd have some channel that uniformly labeled all your objects
00:24:44.25 to detect, independent of the thing you wanted to measure,
00:24:48.19 that you could in a rigorous and unbiased fashion, pick out
00:24:51.05 all the objects. And then inside those objects that you
00:24:54.01 detected, you would just quantify this other signal.
00:24:56.17 The other thing that thresholding does is that it gives you
00:25:03.16 a binary image out. It's 1 inside an object and then 0
00:25:07.01 everywhere else in the image. And so of course this could be used to
00:25:10.11 identify objects. You just find all the things that are 1.
00:25:12.21 But this can also be then manipulated to further separate
00:25:17.12 or define where objects are. And I just wanted to touch on this
00:25:21.15 in a little bit of detail. This is again, another large area
00:25:25.17 of image processing, of binary image processing.
00:25:29.25 But I'm just going to mention a few very common techniques here.
00:25:33.12 And two common techniques are erosion and dilation.
00:25:36.29 And superficially, these look a little bit like the filtering
00:25:40.29 approaches we had before, where we had a kernel here that's
00:25:43.27 now called the structuring element. We have our image here
00:25:46.22 that's now just zeros and ones, it's the image of this little
00:25:49.05 funky looking object here. And the idea here is we are going
00:25:53.23 to apply this structuring element to this image and either
00:25:58.00 take only pixels where the structuring element completely
00:26:01.24 fits inside this object. Or we're going to create new ones in this
00:26:06.22 image wherever there is a one that matches the center of this
00:26:11.16 structuring element. So the idea in erosion is that we want to go in
00:26:15.13 and inside every 1 in this image, we put the center of the
00:26:19.16 structuring element there. And if every 1 in the structuring
00:26:22.20 element overlaps with a 1 in our binary image, then we
00:26:25.20 keep that pixel as a 1. Otherwise, we set it to 0. So if
00:26:28.27 we would put it down here, we would have 1s that overlap these
00:26:31.26 0s. And so we'd set this pixel to 0. Go here, we have 1s that would
00:26:36.27 overlap with 0s, we set that pixel to 0. But if we put this
00:26:39.17 structuring element here, you could see that 3x3
00:26:42.03 box of 1s would all match with the 1s in the image, so we would keep
00:26:44.27 this pixel as 1. So we do this, and we do this erosion here.
00:26:50.00 We get an image here that now just has two pixels in it
00:26:53.25 that are set to 1. And these are just the two pixels that are
00:26:57.04 fully inside the body of this object here, they're these two pixels
00:27:00.06 in there. So erosion is a good way to get rid of sort of stray
00:27:05.07 pixels, little things that stick out from objects. And then its
00:27:11.15 counterpart is called dilation. And the idea here is that we go in
00:27:15.03 and we would drop this structuring element down here,
00:27:17.20 so we'll drop this structuring element here and match the
00:27:21.16 center 1 with this 1 here. And then we would set all these 0s
00:27:24.19 to 1s, wherever there's a 1 in the structuring element that overlaps
00:27:28.00 with this image. So if you do that to this guy here, you do the
00:27:31.20 dilation, now you get basically a big box here that corresponds to
00:27:36.12 dropping that structuring element on these two 1s.
00:27:38.28 And the net effect of this erosion followed by dilation
00:27:42.11 is essentially a smoothing of this image here. We've removed
00:27:46.01 these kind of pointy little small structures that stick out.
00:27:50.03 And left with just a smooth representation of it.
00:27:53.21 So erosion followed by dilation is a very common way
00:27:57.14 of smoothing boundaries in binary images.
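
A sketch of that erosion-followed-by-dilation sequence (often packaged as a "morphological opening"), with a 3x3 structuring element of ones like the one in the example; the object and stray pixel are illustrative:

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True   # a 5x5 object...
mask[0, 4] = True       # ...plus a stray pixel sticking out

selem = np.ones((3, 3), dtype=bool)   # 3x3 structuring element

eroded = ndimage.binary_erosion(mask, structure=selem)
opened = ndimage.binary_dilation(eroded, structure=selem)
# Equivalent one-liner: ndimage.binary_opening(mask, structure=selem).
# The stray pixel at (0, 4) is gone; the 5x5 body survives.
```
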
00:28:02.29 There are a large number of other binary operations, if you go into
00:28:05.26 a program like matlab, there's probably 20. So you can find automated
00:28:10.26 versions of sequential erosion and dilation for doing smoothing.
00:28:13.19 There are operations like hole filling, so if you have 0s completely
00:28:17.28 surrounded by 1s, you set those 0s to 1s, which is very nice
00:28:21.18 for making continuous closed objects. And then there are things
00:28:25.19 that will do stuff like removing objects at borders, which is very
00:28:28.25 useful for removing data that's at the edge of your image that's
00:28:32.01 partially cut off that you wouldn't want to quantify.
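
Sketches of those last two operations: hole filling lives in scipy.ndimage and border clearing in scikit-image (the toy mask is illustrative):

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import clear_border

mask = np.zeros((9, 9), dtype=bool)
mask[1:6, 1:6] = True
mask[3, 3] = False      # a hole inside the object
mask[7:, 7:] = True     # an object touching the image border

filled = ndimage.binary_fill_holes(mask)   # the hole at (3, 3) becomes True
interior = clear_border(filled)            # drops the object touching the edge
```
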
00:28:33.25 So along with filtering, these binary operations are kind of
00:28:38.23 a workhorse in digital image analysis. And that's really all I wanted to
00:28:43.25 say today. This is just designed to give you a taste of the
00:28:46.02 tools that go into digital image analysis. Again, I thank Nico
00:28:51.00 Stuurman for providing many slides that I used here. And here's
00:28:55.07 also a list of a bunch of books that have a lot more detail
00:28:58.08 on these kinds of approaches that you can look into in the
00:29:00.12 future. Thank you.

This Talk
Speaker: Kurt Thorn
Recorded: April 2012

Talk Overview

In this talk on image analysis, Kurt Thorn shows how and why to perform background subtraction and shading correction of digital microscope images, explains how digital image filters work and which ones to use, and describes thresholding and the manipulation of binary images, including erosion and dilation.

Questions

  1. Before extracting quantitative information from an image, you will want to:
    A. Background subtract and shading correct
    B. Normalize by the average intensity
    C. Normalize by the average intensity of all your images
    D. Do nothing
  2. You can estimate the background by (multiple answers possible):
    A. Analyzing the histogram
    B. Taking a camera dark image
    C. Taking an image of an unstained sample
    D. Finding the lowest pixel value in the image
  3. To correct an image:
    A. Divide by the shading image and subtract the dark image
    B. Divide by the dark image and subtract the shading image
    C. Subtract the shading image and divide by the dark image
    D. Subtract the dark image and divide by the shading image
  4. A Gaussian filter will:
    A. Improve the visibility of edges in the image
    B. Invert the contrast in the image
    C. Reduce noise yet preserve many relevant features in the image
    D. Create a binary image
  5. The main use of thresholding is to:
    A. Make the image look better
    B. Identify objects
    C. Get something to erode
    D. Correct for uneven illumination

Answers

  1. A
  2. A, B, C
  3. D
  4. C
  5. B

Speaker Bio

Kurt Thorn

Kurt Thorn is an Assistant Professor of Biochemistry and Biophysics at UCSF and Director of the Nikon Imaging Center, a facility that provides cutting-edge light microscopy equipment to UCSF researchers. Kurt can be followed on his blog at http://nic.ucsf.edu/blog/.
