
Frequently asked questions

16 July 2010
Szymon Starczewski

1. FAQ

As we are often asked similar questions on the forum, by e-mail and in the comments under tests and news items, we decided to start a FAQ section which we will expand over time. Enjoy your reading!
  1. Why do you use just one specimen of a lens, camera or pair of binoculars in your tests?
  2. Why do you test lenses on such an old body as a Canon EOS 20D?
  3. Why don't you test vividness, colours and bokeh?
  4. Why do the results presented on Lenstip.com differ from results presented on other websites or in some photography magazines?
  5. Why are your sample shots so weak?
  6. Why do you discuss equipment all the time, not photography?
  7. Is equipment with a higher score better than equipment with a lower score?
  8. How do you award points in particular categories of camera tests?
  9. What factor is taken into account when you choose a camera body to test a lens from an independent producer?
  10. Can companies lending equipment for your tests influence the results in any way?
  11. How significant is the influence of the score in the "dark frames" category on the final result of a camera test?
  12. Why do the dynamic range results published by other websites differ so much from each other?
  13. Can I lend my lens/camera/pair of binoculars to Lenstip.com for testing?
  14. Why do Olympus lenses have the best resolution results, Nikkor, Pentax and Sony lenses average ones, and Canon lenses the worst?
  15. Can you implicitly trust all the values presented on the various charts and graphs in the tests?
  16. Why do you assess such obscure aberrations as astigmatism? Nobody needs that after all…
  17. Can the autofocus efficiency assessment from a lens test be extrapolated to other bodies?
  18. Why do company A's products always get a better score than company B's products?
  19. Why did you start your tests with Canon, not with Nikon or another company?
  20. Why are all the Lenstip.com editors Canon/Nikon fans?
  21. Why were some sample shots taken long before the test's publication date?
  22. I was counting on a test of company X's newest reflex camera and you've just published a test of an older model from company Y. Why?
  23. Can you compare lens test results obtained on different bodies?

1. Why do you use just one specimen of a lens, camera or pair of binoculars in your tests?

For more than one reason. Firstly, we are limited by time and by the market. It is often very difficult to get even one specimen of a given model, not to mention three or more; in the case of some companies it is simply impossible. It would not be fair to test just one specimen of a model from one company and several specimens from another. Apart from that, testing several specimens causes a lot of problems. For instance:
  • which result should be considered representative? The best, the worst or maybe the average? What should we do when, say, two specimens fare badly and one fares well, or the other way round? Which test should be published? Should we average the results?
  • testing, say, three specimens of a given model would mean three times fewer tests published on Optyczne.pl / Lenstip.com.

Besides, the practice of consistently testing one randomly chosen specimen has a serious advantage: it also serves as a kind of quality control of a given company's products. If there are serious discrepancies between our test results and users' opinions for two or three out of ten lens models produced by a given manufacturer (no matter in which direction, good or bad), it reflects badly on that company's quality control. When there is no such rift, the quality control can be deemed satisfactory.




2. Why do you test lenses on such an old body as a Canon EOS 20D?

When we started the Optyczne.pl website, the Canon EOS 20D was the most advanced APS-C/DX reflex camera on the market, so it's small wonder we chose it for our tests. Nowadays there are many newer models available, but switching to, for example, a 50D is not a good solution: it has more disadvantages than advantages. In fact, the only advantage you could think of would be the possibility of assessing autofocus performance on the more modern module installed in the 50D. If you test consistently on the 20D, you can compare new and older test results directly. What's more, it allows us to compare the results with those of full frame tests, because the EOS 1Ds MkIII and the EOS 5D MkII feature exactly the same linear pixel density as the 20D. It's also worth remembering that contemporary reflex cameras with 12-15 megapixel sensors don't provide as much of an improvement in MTF50 values as it might appear at first and they don't demand more of lenses. For instance, the highest results on the 20D reach the level of 44-45 lpmm, while the highest results on 12-15 Mpix sensors are near 47-50 lpmm. We see an improvement of a dozen or so percent at most, so it would be difficult to imagine a situation in which results obtained on the 20D contradicted results from cameras with more densely packed sensors.
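
As an illustration, here is a quick sketch (using approximate sensor widths and horizontal pixel counts taken from published specifications) of why the 20D results can be set against those full frame bodies: their linear pixel density is practically identical.

```python
# A sketch comparing linear pixel density; sensor widths and pixel counts are
# approximate values taken from published specifications.
bodies = {
    "Canon EOS 20D":        (3504, 22.5),  # horizontal pixels, sensor width [mm]
    "Canon EOS 1Ds Mk III": (5616, 36.0),
    "Canon EOS 5D Mk II":   (5616, 36.0),
}

for name, (pixels, width_mm) in bodies.items():
    pitch_um = width_mm / pixels * 1000   # pixel pitch in micrometres
    density = pixels / width_mm           # pixels per millimetre
    print(f"{name}: pitch ~ {pitch_um:.2f} um, {density:.0f} px/mm")
```

All three bodies come out at roughly 6.4 µm pixel pitch, which is why their lpmm results can be compared almost directly.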

3. Why don’t you test vividness, colours and bokeh?

In some tests we are able to assess the colours of images produced by a lens on the basis of its transmission measurements. As our editorial office lacks such an expensive tool as a spectrophotometer, we have managed to measure the transmission of only some lenses.

When it comes to image vividness and bokeh, it would be difficult to measure them objectively. In most cases they depend on geometry and lighting conditions, not on the lens, so the same device used in different situations can give images deemed good by some (taking into account vividness and bokeh only, of course) and horrible by others.

Many people confuse the notions of vividness and bokeh with those of depth of field and vignetting. That's why you often hear opinions about the mythical vividness of full frame. Such vividness is simply a result of the shallower depth of field and the stronger vignetting.

In the case of bokeh, as I wrote above, the conditions in which a photo is taken are the most important factor. It is true that particular lenses can differ when it comes to the quality of background blur, which is equated with bokeh. That quality is influenced by the shape and number of diaphragm blades in the aperture, the degree of spherical and chromatic aberration correction, and other off-axis aberrations such as coma and astigmatism. Lenstip.com is the only website in the world which consistently describes all these aberrations in all published tests, so in fact you can find competent information about bokeh only here.

4. Why do the results presented on Lenstip.com differ from results presented on other websites or in some photography magazines?

It is difficult to answer such a question because hardly any rival website or magazine presents as clear and lucid an explanation of its testing methods as Lenstip.com. We are the only website in the world which performs tests using unsharpened TIFF/PNG files, converted from RAW format by the dcraw program (whose code is open and has been analyzed by us), on test charts of four different sizes, so we can assess how the tested device performs at different distances.

Most websites and magazines use only one test chart (to save time) and sharpened files (often JPEG files with lossy compression) in their tests. It is also worth noticing that many magazines and rival websites lack a professional, scientific background when it comes to testing optics, so it happens that they don't understand the results they deal with. That's why you can see, for example, distortion tests as a function of aperture, which some time ago could be admired in one of the Polish photography magazines and which are now also published on one of the Polish internet sites. That's also why dpreview.com publishes lens test results in which practically every device reaches MTF50 values exceeding the Nyquist frequency for many aperture/focal length combinations. Such values are against the laws of physics and are the result of analyzing oversharpened files. The green commentary added by dpreview.com under their MTF50 graphs, telling us that "whenever the measured numbers exceed the Nyquist Frequency value it simply indicates that the lens out-resolves the sensor at this point", is proof that whoever tests optics there doesn't fully understand the subject. The MTF50 value, measured as it should be (using unsharpened RAW files and combining the MTFs of a lens with those of the sensor), cannot be higher than the Nyquist frequency of the sensor itself.
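
For reference, here is a short sketch (assuming the 20D's approximate sensor width and horizontal pixel count) of how that Nyquist limit is obtained; on unsharpened files no MTF50 result can exceed it.

```python
# A sketch of the Nyquist limit: the highest spatial frequency a sensor can record.
def nyquist_lpmm(pixels_across, sensor_width_mm):
    """Nyquist frequency in line pairs per millimetre: one line pair needs at
    least two pixels, so it is half of the sampling frequency."""
    pixel_pitch_mm = sensor_width_mm / pixels_across
    return 1.0 / (2.0 * pixel_pitch_mm)

# Canon EOS 20D: about 3504 pixels across a sensor roughly 22.5 mm wide.
print(f"Nyquist limit of the 20D sensor: {nyquist_lpmm(3504, 22.5):.0f} lpmm")
# Any MTF50 value above this must come from sharpening, not from the lens.
```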

5. Why are your sample shots so weak?

None of the Lenstip.com editors is a professional photographer (well, if being a professional photographer means having a press card and earning your living by taking photos, then yes, there is actually one such person). All the members of our editorial team are people with a scientific and/or technical background (Bachelor's, Master's or PhD degrees in physics, electronics or computer science from the University of Warsaw, the University of Wrocław, the University of Poznań or Gdańsk Technical University). The scientific competence of an editor, the ability to conduct tests and analyze raw material, and knowledge of the theory and practice of measurement error calculation are much more important to us than the ability to take artistically excellent photographs. In other words, Lenstip.com is a website about equipment first and about photography second. After all, if you want to check the work of optics specialists and engineers from a given company, you must employ people who are competent enough.

What's more, the photos presented here are not chosen for their beauty. Very often they are taken using parameters which emphasize or highlight flaws of the tested equipment. That's why you often see trees against the background of a light sky (testing chromatic aberration), scenes or buildings with plenty of sky in the frame (coma and astigmatism), specific shots of architecture (distortion), or pictures of statues against a background of greenery (the out-of-focus image quality).

A photographic studio session with a model, which we organize on a regular basis, is a good example of how our photos differ from typical sample shots. A typical photographer usually has one body and 1-2 lenses at his or her disposal; for most of the time he or she positions the model and the lighting, trying to take as many good photos as possible, and then chooses just the best shots from the whole session, which are processed intensively afterwards. We are not interested in the model's pose or the artistic value of a picture; the thing that really counts is to get enough shots for each focal length, aperture and ISO sensitivity setting with every camera or lens available during the test. If we take just a few photos, we'll find complaints about too few sample shots for a given parameter value in the first comment under the text. Taking into account the fact that during one session we usually test 5-6 cameras or lenses, there's always plenty to do, especially as you have to pay attention to many things at the same time; since we always publish "raw" photos, nothing can be improved or changed by processing later.

It is also worth noticing that all sample photos are taken with noise reduction switched off and sharpening set to minimum or low, depending on the camera, in order to make comparisons between them easier. As most users don't set sharpening so low in their cameras, our sample shots might seem weak to them.

6. Why do you discuss equipment all the time, not photography?

Because Lenstip.com is a website about equipment. It is aimed at people fascinated by optics, technology, electronics and their progress. Just as there are websites and magazines for fans of classic cars or jeeps, Lenstip.com is for fans of optical and photographic equipment. Although we fully appreciate the fact that a good photo is 80% down to a good photographer and only then a matter of equipment and conditions, it doesn't make us any less eager when it comes to optical devices, digressions about new purchases and buying ever newer products even though the old ones are still in working order. Such is our hobby and that's that.

7. Is equipment with a higher score better than equipment with a lower score?

Generally speaking yes, but it doesn't have to be this way in every particular case. The final score of a camera or a pair of binoculars is composed of partial scores awarded in many different categories. Their scale was chosen in such a way as to meet the demands of an "average user". It's always good to have an individual approach to the tests, though, because each and every one of us might differ from that "average reader" and pay attention to different factors. For instance, if binoculars A fared slightly better than binoculars B just because binoculars B lacked a tripod socket, then despite its worse final score, model B might in fact be a better choice for somebody who never intends to put such a device on a tripod. We encourage you to read critically and to analyze particular test categories: to construct your own rankings and attach your own weights to the results that matter to you.

8. How do you award points in particular categories of camera tests?

In compact camera and reflex camera tests we award points using only arithmetic or weighted means. In every subsection of a test we score just the camera's performance in that area. Next, we calculate the mean of the scores reached in a particular chapter, using appropriate weights; in this way we get a result for each chapter. The final camera score is the arithmetic mean of the scores reached in all seven chapters.

In the case of reflex cameras we award points in over 20 categories, using smaller constant weights when assessing such things as appearance, switches, JPEG file resolution or dark frames, among others. We deliberately favour the most important features of a camera, such as image quality, autofocus or dynamic range. Despite this practice it sometimes happens that, after the means are calculated, a camera gets a high final score just because it was awarded a lot of points in the first parts of the test and, as a result, it ends up with a score similar to that of another, better camera which produces images of higher quality. Unfortunately you can't avoid this, because averaging simply works that way.

Every user has different preferences, though. One cares more about the casing and looks, another about movies, a third about image quality. It is important that you are aware of how the averaging works. You are always encouraged to create your own marking scale, adjusted to your own needs for a particular camera; the scale used by Lenstip.com is intended to give you just an overall assessment.
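
To illustrate the averaging effect described above, here is a small sketch with made-up category scores and weights (they are not the exact weights used in our tests): a camera with weaker image quality but strong ergonomics can end up with a final score similar to, or even slightly higher than, a camera producing better images.

```python
# A sketch of weighted-mean scoring with invented scores and weights.
def camera_score(scores, weights):
    """Weighted mean of category scores."""
    total_weight = sum(weights[c] for c in scores)
    return sum(scores[c] * weights[c] for c in scores) / total_weight

weights = {"image quality": 1.0, "autofocus": 1.0, "body and switches": 0.5, "JPEG": 0.5}

camera_a = {"image quality": 4.5, "autofocus": 4.3, "body and switches": 3.2, "JPEG": 3.4}
camera_b = {"image quality": 3.8, "autofocus": 3.8, "body and switches": 4.7, "JPEG": 4.7}

print(round(camera_score(camera_a, weights), 2))  # 4.03 - better image quality
print(round(camera_score(camera_b, weights), 2))  # 4.1 - weaker images, strong ergonomics
```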

9. What factor is taken into account when you choose a camera body to test a lens from an independent producer?

At the beginning all lenses produced by independent manufacturers were tested on a Canon 20D because it was the first camera owned by our editorial team. Currently we have bodies from practically all major companies at our disposal and it doesn't matter to us which one we test a given lens on. In most cases, though, the availability date of a given mount is the deciding factor. We try to test new lenses as soon as possible, so we grab them in whichever mount is launched first on the market. Sometimes there are exceptions to this rule, especially when most lenses of a given class have already been tested on one particular mount. It's worth being consistent in such situations, even if it means a delay, because the comparison between results is easier later.

10. Can companies lending equipment for your tests influence the results in any way?

Definitely not. There's just one way of getting a good result in our tests: producing an instrument of good quality. A company which lends us equipment never gets the text before publication and cannot influence the results. One thing that can be influenced by companies promoting themselves on our website is the number of tests. There is a very simple correlation: the more a given producer promotes itself, the more tests of its products we publish on Lenstip.com. Of course we also test equipment produced by companies which don't advertise on our website. We are optical equipment maniacs, so we are interested in the performance of all products from all companies on our market.

11. How significant is the influence of the score in the "dark frames" category on the final result of a camera test?

The "dark frames" mark is a partial score in the "Noise and RAW image quality" chapter. The overall score of this chapter is built from the partial scores of such subsections as RAW file quality, RAW noise and "dark frames". The first two categories are given a weight of 1.0, the "dark frames" category just 0.7. It happens very often, then, that even a drastic change in the "dark frames" category doesn't change the final result of a reflex camera at all, or changes it by just 0.1 point.
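
A small sketch (with made-up subsection marks, assuming the partial scores are simply combined with the weights given above) shows why: even a large swing in the "dark frames" mark moves the chapter score only moderately, and the chapter score is itself just one of several components averaged into the final result.

```python
# A sketch of the weighting described above; the exact combination used in our
# tests may differ slightly, the point is the limited influence of the 0.7 weight.
def chapter_score(raw_quality, raw_noise, dark_frames):
    weights = (1.0, 1.0, 0.7)
    scores = (raw_quality, raw_noise, dark_frames)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

print(round(chapter_score(4.0, 4.0, 4.5), 2))  # 4.13 - strong dark frames result
print(round(chapter_score(4.0, 4.0, 2.5), 2))  # 3.61 - drastically worse dark frames
```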

We realize that people not interested in long exposure photography (night photography, astrophotography) automatically won't be interested in this category's result. The aim of our test, though, is to answer as many questions concerning image quality as possible, covering any application you can think of. After all, some of our readers think the result in this category is important and we don't see any reason why we should gloss over it or remove it from our scale. The fact that other photographic websites don't take "dark frames" into account is an argument for, not against, assessing equipment in this category. Especially as a detailed analysis of the dark current can answer many questions, like those concerning the degree of a producer's interference with RAW files, the way heat is channeled away from the sensor, and what limitations we can encounter working with exposure times ranging from a few seconds to many minutes.

12. Why do the dynamic range results published by other websites differ so much from each other?

Most dynamic range assessments are based on transmission step charts with patches of different density spaced 1/3 EV apart. A picture of such a chart is inspected visually and the range is assessed by eye. Such a procedure can give ambiguous results, though: one person might see a clear border between particular zones where another sees just a blur. Image sharpening can complicate things even more, adding borders between zones that don't exist in the original picture and artificially widening the measured range. The whole process of dynamic range measurement should be based on image files in RAW format and expressed in exposure values for an assumed signal-to-noise ratio threshold. It's worth remembering, though, that the same image can have a wide dynamic range if you accept a low signal-to-noise ratio, or a narrow one if your requirements concerning signal quality are high.
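
The dependence on the assumed signal-to-noise threshold can be illustrated with a simple sketch (made-up sensor parameters: a full well of 40,000 electrons and 8 electrons of read noise): the very same sensor yields a "wide" or a "narrow" dynamic range depending on how strict the criterion is.

```python
# A sketch of dynamic range versus the required SNR, for an invented sensor model.
import math

FULL_WELL = 40_000.0   # electrons at saturation (assumed)
READ_NOISE = 8.0       # electrons RMS (assumed)

def snr(signal_e):
    """Signal-to-noise ratio with photon shot noise and read noise."""
    return signal_e / math.sqrt(signal_e + READ_NOISE ** 2)

def dynamic_range_ev(snr_threshold):
    """Dynamic range in EV between saturation and the darkest signal that
    still reaches the required SNR (found by bisection)."""
    lo, hi = 1e-3, FULL_WELL
    for _ in range(60):
        mid = (lo + hi) / 2
        if snr(mid) < snr_threshold:
            lo = mid
        else:
            hi = mid
    return math.log2(FULL_WELL / hi)

print(f"DR at SNR = 1:  {dynamic_range_ev(1):.1f} EV")   # lenient criterion, wide range
print(f"DR at SNR = 10: {dynamic_range_ev(10):.1f} EV")  # strict criterion, narrow range
```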

13. Can I lend my lens/camera/pair of binoculars to Lenstip.com for testing?

Whenever possible we try to avoid testing second-hand equipment. We do so only in exceptional circumstances, when we know where the equipment comes from and we are sure that neither its age, nor its origin, nor wear and tear will influence the results.

14. Why do Olympus lenses have the best resolution results, Nikkor, Pentax and Sony lenses average ones, and Canon lenses the worst?

It's an effect of testing lenses on different bodies. The Canon 20D has the smallest number of pixels, so the maximum results a lens can reach on it only slightly exceed the level of 44 lpmm. Lenses tested on the 10-megapixel sensors of a Nikon D200, a Sony A100 or a Pentax K10D achieve maximum results of 47 lpmm. Olympus sensors are the most densely packed with pixels, so in tests conducted on an E-3 we can see results exceeding even 50 lpmm. Small wonder, though, because 10 megapixels of an E-3 on the smaller 4/3 format sensor give the same density as 16 million cells on a DX format sensor.
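
The equivalence is simple arithmetic (a sketch using approximate sensor dimensions): scaling 10 megapixels on a Four Thirds sensor up to the area of a DX sensor gives roughly 16 megapixels.

```python
# A sketch of the pixel density equivalence; sensor dimensions are approximate.
FOUR_THIRDS_AREA = 17.3 * 13.0  # mm^2
DX_AREA = 23.6 * 15.7           # mm^2 (Nikon DX)

e3_megapixels = 10.0
dx_equivalent = e3_megapixels * DX_AREA / FOUR_THIRDS_AREA
print(f"Equivalent DX pixel count: {dx_equivalent:.1f} Mpix")  # about 16.5 Mpix
```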

15. Can you implicitly trust all the values presented on the various charts and graphs in the tests?

No. All physical measurements are burdened with measurement errors, both statistical and systematic. You can minimize these errors, and we try to do so, but you can't avoid them completely. It's also worth remembering that you can't solve this problem simply by stating a value and its margin of error. A result of 40 ± 1 lpmm means, statistically (the so-called 1-sigma level), that there is a 68% probability that the real value ranges from 39 to 41 lpmm. Still, there is a 32% probability that the value is actually lower than 39 lpmm or higher than 41 lpmm. You can also use 2-sigma or 3-sigma criteria to increase the probability of covering the true value. In our example, then, you can be 95.4% sure that the value is within the 38-42 lpmm range and 99.7% sure that it is within the 37-43 lpmm range. All the same, there is a 0.3% probability that the value is somewhere outside that range. In other words, 3 measurements out of 1000 will differ from the real result by more than 3 times the given margin of error. Taking into account the fact that our graphs of lenses and cameras present several thousand measurement points, statistics makes us practically certain that more than a dozen of them differ considerably from the real value.
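
The last claim is easy to check with a back-of-the-envelope sketch (the number of measurement points is assumed here as an order of magnitude):

```python
# A sketch of how many points are expected to fall outside 1, 2 and 3 sigma,
# assuming purely Gaussian statistical errors.
outside = {1: 0.3173, 2: 0.0455, 3: 0.0027}  # fraction of a normal distribution beyond +-k sigma

n_points = 5000  # assumed total number of measurement points across our graphs
for k, frac in outside.items():
    print(f"beyond {k} sigma: about {n_points * frac:.0f} points out of {n_points}")
```

With a few thousand points, a dozen or so of them landing more than three error bars away from the true value is exactly what the statistics predicts.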

16. Why do you assess such obscure aberrations as astigmatism? Nobody needs that after all…

Off-axis aberrations like astigmatism can have a very serious influence on test results if the tests are conducted by somebody who doesn't take them into consideration or doesn't know anything about them at all. A lot of rival magazines and websites use only vertical or only horizontal black-and-white edges to measure resolution and, in the case of some lenses, this might distort the results hugely. A perfect example is the Sigma 30 mm f/1.4 EX DC HSM, whose problems with astigmatism we described here.

17. Can the autofocus efficiency assessment from a lens test be extrapolated to other bodies?

No, it can't and it shouldn't be done. The score in every "Autofocus" chapter concerns only the tested lens-and-body combination. It is usually the case that a given lens will work better on high-end bodies from a higher price segment and worse on cheaper devices of lower quality, but it is not a strict rule either.

18. Why do company A's products always get a better score than company B's products?

If that is really the case, it simply means that company A makes better products than company B. Lenstip.com tests treat all companies and their products on an equal footing. In the line-up of every company (though of course more so for those whose products have been tested in sufficiently high numbers) you can find a product with a very high score and a product that was criticized a lot. The testing team has no particular preferences before testing. After the test you can warm to a given item or start to dislike it depending on its performance, and it is only natural that you express those feelings in the summary. Every member of the testing team has daily access to equipment produced by most companies present on the market. If they want to take a photo, they are guided not by a camera's or a lens's brand name but by its capabilities and usefulness for the task at hand. Believe me, if you face a choice of about 10 bodies and several dozen lenses, the last thing you pay attention to is the producer's logo and your liking for it.

19. Why did you start your tests with Canon, not with Nikon or another company?

The Optyczne.pl website was created in the second half of 2005. At that time the only body on the market which allowed us to test optics without limitations and was affordable to us was the Canon EOS 20D. Nikon, Konica-Minolta and Pentax offered mainly bodies with 6-megapixel sensors and we considered that they didn't meet our expectations, as that value was too low. In November 2005 Nikon announced the launch of the D200, which is also an excellent body for optics testing. It hit the shelves in Poland only in January 2006, though, and by that time our tests on the 20D were in full swing.

To be absolutely fair, the first full frame body our editorial office was equipped with for optics testing was a Nikon D3x.

20. Why are all the Lenstip.com editors Canon/Nikon fans?

Privately the Lenstip.com editors use equipment manufactured by very different companies. One of them takes photos with a Canon reflex camera, another owns a Pentax. One has been shooting with an analogue Minolta camera for many years, another has had a Praktica. There's also one person who doesn't feel like owning any reflex camera at all: the company equipment is fully available to him and he can choose whichever device he needs at the moment, produced by whichever company's equipment is best suited for a given task.

21. Why were some sample shots taken long before the test's publication date?

There's no conspiracy behind this. It often happens that we order several lenses or sets of binoculars for a longer test session. Most of their tests we manage to publish soon afterwards, but interesting novelties might appear on the market in the meantime and push already tested but not yet written-up instruments to the back of the queue. When a lens or a pair of binoculars loses its place in the queue, it is sometimes difficult to put it back on the agenda. As a result, the record-holding device waited almost two years for the publication of its test. We do try to catch up (but several tests are still sitting in the drawer, waiting for better times), so don't be surprised if we publish a test with strangely old sample shots from time to time.

22. I was counting on a test of company X's newest reflex camera and you've just published a test of an older model from company Y. Why?

We should start with the fact that it takes about 3 weeks to test one reflex camera. In special circumstances that period can be shortened, but only by allotting more editors to a given device's test. It also happens that testing takes even longer, as not all the editors we employ work full time.

Now, please imagine the following situation: a Polish distributor gets the first specimen of a product from a new production line. They can lend it to us for 3 weeks and then see its test published on our page. During the same period of time, though, that device could be tested by 3-4 other editorial teams; the distributor would have more tests published and would profit more from such a situation. It's not hard to guess which option will be chosen, then.

The next problem is planning the editors' working time. It often happens that an editor finishes one test and reports that he needs new equipment for the next one. At that particular moment none of the newest models might be available to us; sometimes you have to wait as long as 1-3 weeks for a device. In order not to waste time we often decide to test an older camera model and keep our editor busy, but sometimes it entails a delay in testing the newest model. We are prepared to take that risk, though, because we don't like stoppages in our editorial office.

23. Can you compare lens test results obtained on different bodies?

Yes, to some extent. The test results presented on the Lenstip.com website are comparable thanks to a well-thought-out procedure based on the analysis of unsharpened RAW files. The MTF50 values which any tested lens reaches in our tests depend not only on its optical properties but also on the sensor it was tested on. Still, an appropriate comparison can be made by analyzing tests in which one and the same device is tested on several platforms; it was done so in the case of the Sigma 30 mm f/1.4 EX DC HSM and the Canon EF 100 mm f/2.8L Macro IS USM. The performance of the best fixed-focus lenses, tested on a given sensor at apertures free (or almost free) from the influence of optical aberrations, is an additional source of information.

Let's get down to the facts, then. The oldest tests on Optyczne.pl were performed on the 8-megapixel sensor of a Canon EOS 20D. In its case the best "primes" reach results near 45 lpmm, and when you stop them down to f/16 the resolution decreases to a level just above 30 lpmm. That last value is deemed to be the decency borderline.

10-megapixel CCD sensors are a bit more densely packed than the 20D sensor; we find them in such cameras as the Nikon D200, the Sony A100 and the Pentax K10D. All these cameras have been used in our lens tests and all of them reach essentially the same results (in the case of the K10D this concerns just one coordinate, though). The best primes get as high as nearly 48 lpmm on them, and if you stop them down to f/16 you get results at the level of 32-33 lpmm. The 10 Mpix CCD results are higher than those from the Canon 20D by a scale factor of circa 1.07-1.08.

A 14 Mpix CMOS sensor is even more densely packed than a 10 Mpix CCD; we deal with it in the case of the Pentax K20D and the Samsung NX10. When it comes to the K20D, the increase in pixel density didn't result in an increase in MTF50 values. One of the sharpest Pentax system lenses, the 1.8/77 LTD model, reached 48.5 lpmm here. The results of lenses tested on the K20D are directly comparable to those from a 10-megapixel CCD sensor, and the scale factor relative to the 20D remains the same, about 1.07-1.08. The Samsung NX10 features a newer sensor implementation, analogous to that used in the K20D, so the MTFs you can get on it are a bit higher and might even reach slightly above 50 lpmm.

The 15 Mpix CMOS sensor of a Canon 50D, also used for optics tests, is next when it comes to pixel density. In its case the sharpest lenses reach results exceeding 52 lpmm and at f/16 they reach 34-35 lpmm; these latter values are the decency borderline here. On average the results on the 50D are higher than those from the 20D by a scale factor of 1.15 and about 1.08 higher than those we get on the D200.

The next body used in our optics tests is the Olympus E-3. Although its sensor consists of 10 million pixels, its cells are more densely packed than in the case of the Canon 50D due to the smaller 4/3 format detector. Fortunately the MTF50 results reached on the E-3 and on the 50D are practically identical, so they are directly comparable.

The most densely packed sensor used in our tests is the 12-megapixel LiveMOS of an Olympus E-P1 (equivalent to 20.3 million pixels on a DX sensor). So far we've conducted only two lens tests on it, so it would be difficult to write anything binding here. It seems, though, that the best lenses can obtain results exceeding even 54-55 lpmm because of its high pixel density and relatively weak AA filter; the results will be several percent higher than those from the E-3 and the 50D. In the case of the E-P1 the decency level oscillates around 37 lpmm.

Full frame performance is another story altogether. Fortunately, the Canon EOS 1Ds MkIII we use features the same pixel density as the Canon 20D, so it allows us to compare the results almost directly. I wrote "almost" because the newer detector seems to use its pixels more efficiently and in its case the sharpest "primes" reach MTFs a bit over 46 lpmm. The frame centre performance graph is shifted upwards by about 1-2 lpmm compared to the same graph from the 20D.

A very similar (but opposite) situation can be noticed in the case of the Nikon D3x and the Sony A900/A850. These cameras are equipped with 24 Mpix FF sensors with a pixel density a tad higher than in the case of the Nikon D200 or the Sony A100. Despite this, their MTF50 values are about 2 lpmm lower than those for a 10-megapixel CCD sensor.

At the very end there is the Leica M9, a full frame camera with an 18 Mpix sensor. It generates higher MTFs than the 21-24 Mpix full frames from Canon, Nikon or Sony because its designers dispensed with an AA filter. Luckily the maximum results reached on the Leica M9 are at the same level as those from the Olympus E-3 or the Canon 50D, so once again you can compare them easily.

It's worth remembering, though, that all these conversion factors are nothing but estimates. A conversion from one system to another won't always be linear, and statistical and systematic errors must also be taken into account. The median of the latter is about 0.5 lpmm and the former can be estimated at about 1 lpmm. This means two MTF50 graphs for one and the same lens, tested twice on the same body but at two different moments in time, might in an extreme case be shifted by even more than 1 lpmm.
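
Treated with the above caveats, the scale factors can be used for rough conversions. Here is a sketch (the factors are the approximate values quoted above, and the quoted uncertainties of roughly 1 lpmm statistical and 0.5 lpmm systematic still apply):

```python
# A sketch of converting an MTF50 result between test bodies using the rough
# scale factors quoted above (relative to the Canon 20D baseline).
SCALE_VS_20D = {
    "Canon 20D": 1.00,
    "Nikon D200 / Sony A100 / Pentax K10D": 1.07,
    "Canon 50D / Olympus E-3": 1.15,
}

def convert(mtf50_lpmm, source, target):
    """Rescale an MTF50 value from one body to another; errors of the order of
    1 lpmm are not accounted for."""
    return mtf50_lpmm * SCALE_VS_20D[target] / SCALE_VS_20D[source]

# A lens scoring 42 lpmm on the 20D corresponds to roughly:
print(f"{convert(42, 'Canon 20D', 'Canon 50D / Olympus E-3'):.1f} lpmm on the 50D or E-3")
```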


