
Wednesday, 12 July 2017



Oops...dyslexia...I used 364 instead of 864 in my previous area measurement. So it works.

A full frame sensor has 3.8 times the area of a Micro 4/3 sensor, so with the same pixel density a full frame sensor would have 20MP * 3.8 = 76MP.

Source: http://photoseek.com/2013/compare-digital-camera-sensor-sizes-full-frame-35mm-aps-c-micro-four-thirds-1-inch-type/


I inadvertently used the comments as a scratch pad...sorry

The quick answer is 80 MP. With a 2X crop the Micro 4/3 sensor has 1/4 the area of full frame. Unless there are subtleties I'm not aware of.

76.8 Megapixels

Area of full frame sensor, divided by area of Micro Four Thirds sensor, times 20 megapixels:


(864 mm² / 225 mm²) * 20 MP = 76.8 MP

71MP, assuming MFT is 18x13.5mm and FF is 36x24mm.

I am slightly confused about sensor size vs. imaging area: MFT seems to be either 18x13.5 or 17.3x13, and I am not sure what FF really is (although I think 35mm film really is 36x24, depending on the camera, obviously).

If you assume the smaller area it is about 77MP.

The general formula is

C = ((H*W)/(h*w))*c

where C is pixel count on the larger format, H, W are the dimensions of the larger format, c is the pixel count of the smaller format & h, w are the dimensions of the smaller format. This works for any two formats of course.

20mp / (18mm * 13.5mm) = 82,304 pixels/mm^2

36mm * 24mm * 82,304 pixels/mm^2 = 71.1mp
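That general formula is easy to sanity-check in a few lines of Python. This is just a sketch; both the 18 x 13.5 mm and the 17.3 x 13 mm Micro 4/3 figures appear in these comments, so both are shown:

```python
def scaled_pixel_count(big_w, big_h, small_w, small_h, small_mp):
    """C = ((H*W)/(h*w)) * c: megapixels of the larger format at
    the same areal pixel density as the smaller one."""
    return (big_w * big_h) / (small_w * small_h) * small_mp

# 36 x 24 mm full frame vs. 18 x 13.5 mm Micro 4/3, 20 MP:
print(round(scaled_pixel_count(36, 24, 18, 13.5, 20), 1))    # 71.1
# ...and with the 17.3 x 13 mm imaging-area figure instead:
print(round(scaled_pixel_count(36, 24, 17.3, 13.0, 20), 1))  # 76.8
```

The spread between 71 MP and 77 MP in these comments comes entirely from which Micro 4/3 dimensions you plug in.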


I figure the full frame sensor would have 77 megapixels if the pixels were the same size as on my Lumix GX8. This is based on the calculation that a FF sensor is 864 square mm and the micro 4/3rd has 225 square mm.


Isn't M4/3 exactly half the size of full frame?

A 20MP 4/3 sensor is 3872 pixels high spread over 13mm, or ~298 px/mm. A full frame sensor is 24mm high and at the same pixel pitch would be 7152 pixels high. At a 4/3 aspect ratio that would be 7152x9536 or ~68MP. At the normal FF aspect ratio of 1.5 it would be 7152x10728 or ~77MP.
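The same pitch-based arithmetic can be sketched in Python, using the 3872-pixel height assumed above; tiny rounding differences from the ~298 px/mm figure shift the totals slightly:

```python
# Scale by linear pixel pitch so the different aspect ratios
# (4:3 vs. 3:2) are handled explicitly rather than by area ratio.
m43_px_high, m43_mm_high = 3872, 13.0
pitch = m43_px_high / m43_mm_high   # ~297.8 px/mm
ff_high = round(24 * pitch)         # full frame is 24 mm high
ff_wide = round(36 * pitch)         # ...and 36 mm wide
print(ff_wide, ff_high, round(ff_wide * ff_high / 1e6, 1))  # 10722 7148 76.6
```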

Assuming the dimensions of a m4/3 sensor are 17.3mm x 13mm, we multiply the two to get an area of 224.9 sq mm. If we spread 20 MP over that area, we get 20 MP divided by 224.9 sq mm, or roughly 0.0889 MP per sq mm.

Now assuming the dimensions of a FF sensor are 24mm x 36mm, we multiply the two to get an area of 864 sq mm. Since we want the same pixel density, each sq mm would get roughly 0.0889 MP, so 864 sq mm multiplied by 0.0889 MP per sq mm would come to around 76.8 MP.

As a follow-on comment: I've been recently using a Sony RX100V with a 20MP 1" sensor (8.8mm x 13.2mm). If we scale that to full frame with the same pixel pitch, we would get ~148MP. The image quality at base ISO on the RX100V is amazing, especially when you consider the pixel pitch.

That would let them build one heck of a versatile camera. Extra reach in a crop mode (though likely to be lens limited). Extra clean if pixel averaged to 36MP. Or high res for large detailed images. I will be surprised if we don't see such a camera in the next decade.

I really don't have a clue, so I will have a go. Micro 4/3 sensors are 225 sq mm; the full frame sensor is 864. That is 3.84 times larger. If the full frame sensor density should be the same as the 20 MP 4/3 sensor, then it needs to have 3.84 times more pixels, or about 77 MP.

Is my mind too simple to solve the problem, or was the problem so simple?

Oops, should have googled first. It is half the diagonal of full frame, so the Pythagorean theorem gives me 43.27mm for the diagonal of full frame. Half of that is 21.635mm for M43. The surface area of full frame is 864 square millimeters (24mm x 36mm). Now, I will cheat, because figuring out the width of the M43 sensor from the diagonal and the aspect ratio is beyond what I'm willing to do with math. I looked up the width as 17.3mm. So with the Pythagorean theorem, I calculated the height to be the root of 168, which is approximately 12.96. This makes the surface area of M43 about 224 square millimeters. Dividing the surface area of full frame by that of M43 gives me a factor of approximately 3.85.

So 20 MP x 3.85 ≈ 77 MP. I'll round that up to an 80 MP sensor for full frame with the same pixel density as a 20 MP M43 sensor.

That was a lot of work, and this is with leaving out calculating the width of the sensor from the diagonal and the aspect ratio, which is mathematically feasible but probably beyond what I remember personally from high school.

About 77MP. The 51MP 5DS has pixel density equivalent to a 13MP m43 sensor. And a 24MP APS-C has pixel density equivalent to a 14MP m43 sensor.

But the 20MP 1" sensors in the RX10s would require a 148MP FF sensor (or a 39MP m43 sensor) to match the pixel density.

(If anyone came up with different numbers - I found the linear pixels/mm for any given sensor, then multiplied by the sensor dimensions to work around aspect ratio differences).

I've always done it this way:
m4/3 sensor area is 225 sqmm
FF is about 860 (24x36 = 864 but many are 23.9mm high)
860 / 225 = 3.82 times the area of m 4/3
If pixels were the same size....
20 x 3.82 = 76.4
So 76.4 MP would fit in the larger area.

But you probably wouldn't do that because there is so much benefit from larger pixels, you would use fewer larger ones.

Linear density of that 20MP Micro 4/3 sensor is around 2980 px/cm (7570px/inch), ergo the same pixel density FF sensor will have around 76.7 MP.

I don't think the calculations using ratios of area are correct since FF and m43 do not have the same aspect ratios, so you'd be assuming non-square pixels. The 68MP calculation using pixel density is correct, but who makes a 32x24 mm^2 sensor anyway?

(But such a sensor exists, macro 4/3 might be a cool name for it!)

@ Berndt
m4/3 has a crop factor of 2, which is based on the fact that its long dimension is 17.3mm, or roughly half of 36, which is correct.
But the AREA of FF is 3.8 times larger than m4/3, so a larger number of tiny pixels would fit.

Well, the calculation is easy enough - it just scales by area (so, as others have observed, 76MP for full-frame, and roughly 130MP for the new Fuji and Hasselblad mini-medium format cameras).

However, try getting a sharp 76MP hand-held image without the amazing stabilisation provided by current micro 4/3 bodies. I routinely shoot both an E-M1.2 and Leica M 262 side by side, and usually the Olympus files are sharper.

This is partly because of the extremely precise and reliable AF, and partly because of the image stabiliser. But the files are so sharp that I wonder if Olympus is also applying some kind of sharpening algorithm to the RAW files.

Something to consider in all this MP math is format.

For example, a Fuji 24 MP sensor, cropped to the squarer 4/3 format, same height, is 21 MP, in practice indistinguishable from the 20 MP µ4/3 sensor.

So, for someone who shoots portraits without rotating the camera 90°, someone who shoots for 8x10 print proportions, one who just likes squarer formats, etc., they have the same resolution.

Why do you ask? Do you want a full-frame sensor with tiny tiny pixels crammed onto it in the same manner they are in a micro-4/3rds camera?

As others have said, if you work out the relative pixel density it would map to 76.8 MP on FF.

20MP * (36*24)/(17.3*13)

However, the image circle based on sensor diagonal is 2X as big. Stopped down to the equivalent DOF, the FF lens would have less resolution. The Rayleigh limit at this pixel density is only about F7.1.

I.e., if you use a crop sensor, use a lens with the appropriate image circle.

Is it an African or a European sensor? Laden or unladen?


Mike says "I don't know if this gets us any closer to the question of whether a smaller sensor overcomes its deficiencies to be better for long-telephoto work or not"

If you can carry and use a big lens on a big sensor, then theoretically you can take better images, but between a cropped from a high res larger sensor and a high res small sensor, I'd bet on the small sensor, because the lens designed for the small image circle is likely sharper and contrastier. Birders are paying $1000 for the Nikon 1 70-300 rather than adapting an F mount lens because the '1' lens is much sharper (for a given area). I imagine that's the case with m43 teles, too ... a Canon FF tele might be brilliant, but crop it down to 4/3 size and I'd place odds on the native m43 lens.

So I guess the general idea would be to decide what effective focal length you need to fill the frame, decide how big a lens you're willing to buy/carry, then figure out what size sensor you need to get from that lens to the desired FOV.

Rather than performing the number-crunching comparison exercise, this is how I would make my decision. Going back to film days: would you have chosen a medium format film camera/lens system (if in fact you could get a long telephoto in medium format), or a 35mm film/camera/lens for birds, wildlife, etc.? Today, in digital, I would most definitely choose M4/3 for any long telephoto work. And remember, back in the film days a 300mm lens was considered the minimum for bird or wildlife photography.

I suspect I'm just behind the times, but I have always considered pixel size to be important. I have a selection of new and old cameras and I still see something different and appealing in the images from the older cameras with lower pixel density (same size sensors).

I recognize the older cameras have ISO limitations, but are capable of excellent IQ otherwise. Is pixel pitch inconsequential?

(As you can tell, I'm not convinced more is better -- and rarely print larger than 11x14).

If one is shooting at base ISO and the resolution of the smaller sensor is sufficient, then the smaller sensor wins, since one can save on weight, size, and money.

If, OTOH, one needs maximum resolution the larger sensor wins. Also, if one shoots in low light and the equivalent apertures of the lenses are the same, the large sensor wins.

These naturally describe best case scenarios where one uses the best bodies and lenses. Failing that the comparison changes. Size and weight should not be dismissed though, it's hard to move around with heavy packs and tripod shooting is not as agile as hand held.

Regarding Eliot Porter's choice of tools...

I doubt that "sensor" size had anything to do with his choice. Unless he was making photographs of stuffed birds or birds that have been stapled or gaffer taped to their perch*, the large format camera is simply too cumbersome. Not to mention the lack of appropriately long lenses.

* My friends have accused me of both at times!

If you are photographing small things like birds in flight, pixel density is a big help. That's why both Canon & Nikon put versions of their best AF systems in the 7D Mk II and D500. They are very popular cameras for that purpose.
If you already have the lenses, FF & APS-C make a very strong combination.
If you don't already have the lenses, cameras like the Oly take it one step further. I'm sure they are great for BIF.

If Eliot Porter had wanted the same diagonal view angle on a 4x5" camera as he could have had with a 500mm lens on a 35mm camera, he would have been shopping for a focal length of roughly 1900mm (over 6 feet!). To cover the 4x5" frame this would have needed heavy equipment to manipulate, not to mention the cost!

You know, it was when people all over the world started going mental about "aperture equivalence" that I decided never to shoot digital again. If you start a "pixel equivalence" craze, well...

I'd be interested in knowing what tests people have done comparing cropping into a D800 or equiv to the 2x portion of M43? Meaning, if you took that center portion of the D800, do you like it better than the M43 file?

John Gillooly

Some bird photographers compare setups by calculating PPD - pixels per duck. In other words, if I photograph the same small distant object and crop as required, how many pixels am I left with. Seems like a good, real world way of comparing the effective reach.
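The "pixels per duck" figure can be estimated with a thin-lens sketch. The function name, the ~300 px/mm GX8-class pitch, and the duck's size and distance below are all my own hypothetical assumptions, not numbers from the comment:

```python
def pixels_per_duck(focal_mm, distance_mm, subject_mm, px_per_mm):
    """Linear pixels landing on a subject, using the thin-lens
    magnification approximation: image size = focal * subject / distance."""
    image_mm = focal_mm * subject_mm / distance_mm
    return image_mm * px_per_mm

# 300 mm lens, a 400 mm duck at 20 m, ~300 px/mm pixel pitch:
print(round(pixels_per_duck(300, 20_000, 400, 300)))  # 1800
```

Comparing two setups is then just a matter of calling the function with each body's pixel pitch and each lens's focal length.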

It is not directly related to your question, for which I apologize in advance. My understanding is that the larger the pixel, the better the sensor's light-gathering capability. So I see no use for a full frame sensor matching m4/3 pixel density, in terms of image quality. Or am I totally off? (It's an honest question from an Olympus EM-5 user.)

Same pixel density on the sensor?
Or same pixel density on the subject? (which seems more relevant to many people)

If you care about the subject then you need to be comparing focal length in this as well.

Though I think we can all agree that the 4/3 sensor's pixel density is higher, the math would also dictate that each pixel is about 1/4 the size, hence gathering less light and more susceptible to noise. Therefore, ultimately, how does that affect the acuity of the image, and would FF really need to be 80 MP to achieve the same acuity? Personally, I haven't a clue; perhaps someone with a better grasp of optics and physics can enlighten me.

If your lens is a 400mm f/6.3 with a 20 MP micro 4/3 camera then you’re already against the diffraction limit wide open. Here is a useful calculator (not mine, just found it): https://www.pointsinfocus.com/tools/diffraction-limited-effective-resolutions . It seems trustworthy as the numbers stack up rather well with diffraction-limited empirical lens resolutions of the best lenses and sensors at photozone.de (see e.g. photozone lens test numbers for the 90mm Sony macro on the A7RII at f/11). Using this calculator, the “effective” maximum resolution of an f/6.3 m4/3 lens (green color, 550 nm wavelength) is about 13 MP wide open while the competing 800mm f/5.6 on full frame would be limited to about 51 MP. I don’t have a horse in this race--the only telephoto lens I own is a Jupiter-9 (85mm) used a few times a year on full frame. As the old saying goes, 4 MP should be enough resolution for everyone, so this diffraction-limited loss might not matter to most birders or sport photographers.
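For a rough feel for these diffraction numbers, the textbook Airy-disk diameter is easy to compute. This is only a simple sketch; the linked calculator uses a more elaborate model, so its "effective resolution" figures won't match this directly:

```python
def airy_diameter_um(f_number, wavelength_nm=550):
    """Airy disk diameter to its first minimum: 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

# At f/6.3 the Airy disk already spans a couple of ~3.3 um
# GX8-class pixels; full frame at f/5.6 fares only slightly better.
print(round(airy_diameter_um(6.3), 2))  # 8.45
print(round(airy_diameter_um(5.6), 2))  # 7.52
```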

I had this conversation with an Olympus rep at a UK photography show a few years ago. She claimed the m43 sensor was half the size of full frame, rather than approximately a quarter. Presumably that statement was based on the 2x crop factor. APS-C works out at around 40% of the area of full frame, which is probably a smaller difference than most people realise.

Correction. The "smaller difference" I was thinking of is the comparison between m43 and APS-C sensor sizes.

Replying to Mike/humble host: I doubt Porter would have wanted to lug around a telephoto lens for 8x10 equivalent to the reach of a long telephoto for 35mm.

>>Eliot Porter, the great nature photographer, shot mostly with an 8x10 but did his bird photography with 35mm.

You're overthinking this, Mike. The most likely reason was that long telephoto lenses were available for 35mm cameras and not available for large format sheet film cameras. Can you imagine how huge a 400mm equivalent lens for 8x10 format would have to be?

Re: "I don't know if this gets us any closer to the question of whether a smaller sensor overcomes its deficiencies to be better for long-telephoto work or not":
As with everything else in camera design, it's a tradeoff.
There is a clear loss of quality as pixel size goes down, but development has favored smaller sensors lately, so the tradeoff is getting smaller; it is still there, though.
It also depends on your requirements, horses for courses as they say.
I would think that the latest m4/3 sensors and a camera & lens system like the Oly would provide excellent results for the vast majority of photographic requirements.
That is not to say completely equivalent to larger sensors/ larger lenses, but very good.
If you want state of the art quality, bigger pixels help.
All that matters is that it works for you.

How can there be many different answers? Oh you mean all but one are wrong!

You have a GX8, so let's assume that it is the 20 MP m43 sensor that you reference. It has 5184 pixels across its 17.3mm sensor width, so the pixel size is 3.3372 microns. You can fit 10,787 of those pixels along a 36mm long sensor, and 7192 along its 24mm height, for a total of 77,580,104 pixels.

Also re: Eliot Porter's choice of format: I believe a 400mm lens of any type will make the same-SIZED image on FILM; i.e., a bird that fills a 24x36mm frame will also be 24x36mm in size in an 8x10 image, just a much smaller part of the total scene. Theoretically, one could crop the bird out of the 8x10 at the same physical resolution as Tri-X on 35mm for an equivalent result. But portability/usability issues are the problem.

It's simple, a GX8 has 5184 by 3888 pixels on a 17.3 by 13 mm sensor. Taking the wide dimension, 5184 pixels / 17.3 mm = (about) 300 pixels per millimeter (ppm) pixel density. A full frame sensor is 36 by 24 mm. 36 mm x 300 ppm = 10800 pixels, 24 mm x 300 ppm = 7200 pixels. So full frame at the same pixel density as a GX8 would be 10800 by 7200 pixels, 77.76 MP.

You got more exact answers, but for a quick approximation: You certainly know that the difference between m4/3 and APS-C is about one stop, and the difference between APS-C and full frame is another stop. So m4/3 and full frame are two stops apart. That is a factor of two for lengths (e.g. aperture) and a factor of four for area (and hence shutter time). So at the same pixel density a full frame sensor would have about four times as many pixels as an m4/3 sensor.

A short digression on a famous related mathematical question:

The wording of the posed question suggests that areal rather than linear density is what is under consideration.

Assuming unit density, the accuracy of approximating, by the area of a region, the number of points having integer coordinates and lying within the region is highly dependent on the region's shape, and in the case of a disk leads to a fascinating and famous mathematical question:

Estimate, as a function of R, the difference between the area of the disk of radius R centered at the origin, and the number of lattice points with integer coordinates (positive, negative, or zero) that lie within it.

This problem interested Gauss, who showed that the difference is dominated by a multiple of R, which for large R is considerably smaller than the area, which increases quadratically with R. Much later, the error exponent was improved to R to the power 2/3, and still later, to R to a power slightly smaller than 2/3. Later the English mathematician Hardy (cf. the movie "The Man Who Knew Infinity") did work, with possible input from Ramanujan, suggesting, but not proving, that any exponent greater than 1/2 would work. Hardy also showed, however, that 1/2 itself does not work, so a valid exponent cannot be less than or equal to 1/2. The problem of unravelling the mystery of the exponent for the error estimate is known as the Gauss circle problem, and remains, along with the Riemann Hypothesis, one of the most famous unsettled questions in mathematics. The conjectured result is that any exponent greater than 1/2 is valid.
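A brute-force check of the circle-problem error is straightforward (a sketch for small radii only; the enumeration is O(R²)):

```python
import math

def lattice_points_in_disk(R):
    """Count integer points (x, y) with x^2 + y^2 <= R^2."""
    return sum(1 for x in range(-R, R + 1)
                 for y in range(-R, R + 1)
                 if x * x + y * y <= R * R)

R = 100
N = lattice_points_in_disk(R)
# The count tracks pi*R^2 with an error far smaller than the area.
print(N, round(math.pi * R * R), N - round(math.pi * R * R))
```

Even at R = 100 the discrepancy is tiny compared with the area, which is exactly the behavior the exponent question tries to pin down.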

I hope this link can help and bring some clarification to Gary's question. Kindly check this: https://www.ephotozine.com/article/complete-guide-to-image-sensor-pixel-size-29652. Thank you!
