Frequently Asked Questions
MSU Graphics & Media Lab (Video Group) Frequently Asked Questions
Q: Which metrics did you use in your comparison?
A: We used the metrics implemented in the MSU Quality Measurement Tool; it implements the objective comparison metrics that are most commonly used.
The main metric in our comparison is PSNR, because it is used in most objective comparisons, so our results will be understandable to everybody. We will increase the number of metrics in the next comparison despite the increase in measurement time.
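For reference, the sketch below shows the textbook PSNR formula computed over a single 8-bit frame (or Y plane). It is only an illustration of the formula, not the implementation used in the MSU Quality Measurement Tool.

    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
        # Mean squared error between the two frames, computed in float to avoid overflow.
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames: PSNR is unbounded
        return 10.0 * np.log10(peak * peak / mse)

Per-sequence PSNR is then usually reported as an average over all frames (or derived from the average MSE); the exact averaging convention is a tool-specific choice.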
Q: Did you try to find mistakes in your comparison?
A: Of course we did. Before releasing the comparison, we found reviewers for it. They received a draft of our report a month before the public release and, in exchange, sent us a list of comments and mistakes they found. This exchange significantly decreased the number of mistakes in our report.
Q: How did you verify objective measurements?
A: We used several different methods:
These are the methods we use to increase the reliability of our results.
Q: You mainly test the encoders, so why don't you call your comparison an "encoder comparison"?
A: We use the developer's decoder if it is provided to us together with the encoder. This means that developers can improve their results using decoder optimization, postfiltering, etc.
In the next comparison we are going to add decoder compatibility tests.
Q: What computers did you use for the measurements?
A: You can find information about our computers' configurations on this page, or on page 7 of the PDF document.
Q: Why did you use deinterlaced sequences?
A: It is our general policy. We chose sources similar to the sequences that ordinary users deal with. It is very difficult for an ordinary user to get a progressive sequence nowadays. As a rule, users get the sequences they compress from DVDs, satellite receivers, DV cameras, etc. They capture those sequences in real time using popular, simple embedded deinterlacing methods. Such methods, along with compression artifacts, decrease encoding performance on those sequences. We think that popular codecs should take these features of the sequences into consideration with the help of prefiltering and advanced motion compensation.
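As an illustration of what a simple embedded deinterlacing method can look like, the sketch below performs naive line averaging ("bob"-style): one field is kept and the lines of the other field are interpolated from their neighbours. It is a generic example only, not the specific deinterlacer used by any particular capture tool or in our source preparation.

    import numpy as np

    def bob_deinterlace(frame: np.ndarray, keep_top_field: bool = True) -> np.ndarray:
        # Keep one field and rebuild the lines of the other field by averaging
        # the neighbouring kept lines (naive intra-frame deinterlacing).
        out = frame.astype(np.float64).copy()
        first_dropped_line = 1 if keep_top_field else 0
        for y in range(first_dropped_line, frame.shape[0], 2):
            above = out[y - 1] if y > 0 else out[y + 1]
            below = out[y + 1] if y + 1 < frame.shape[0] else out[y - 1]
            out[y] = (above + below) / 2.0
        return out.astype(frame.dtype)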
Q: Is it possible for developers to use information about your testing set to improve their results in your comparison?
A: Theoretically it is possible. That is why each time we replace two or three sequences with completely new ones, publishing them only after all measurements are finished. Readers can then pay special attention to those sequences (we also track differences in the results with great interest).
Q: You used an ATI graphics accelerator in your computers, and the ATI codec is the fastest in your comparison. Don't you think that is strange?
A: We used the same computers as in last year's comparison, and nobody knew about the ATI codec at that time.
In any case, we were also very interested in whether the ATI codec used any hardware acceleration. However, such acceleration does not conflict with the testing rules (we compare codecs for ordinary PC machines).
We ran additional tests on another computer with the following configuration:
Measurement results ("Foreman" sequence):
Q: I think you did not attempt a visual comparison; there are only a few frame pictures in your comparison!
A: We pay a lot of attention to correct visual comparison. :)
Q: Why didn't you add codec X to your comparison?
A: For each codec we used presets provided to us by the developers. So only codecs whose developers we were able to communicate with were added to the comparison.
Q: Why did you measure High Profile separately?
A: This year, as in the previous comparison, we tested only Main Profile. Moreover, when we announced the "Call for codecs", to our knowledge only one codec supported High Profile well enough. However, when developers provided us with new versions of their codecs, we discovered that at least three companies had implemented High Profile.
Next year we are going to remove the profile restrictions, but we will publish the codec presets, including the profile used.
Q: Why did you use such strange rules for your informal comparison?
A: People have already sent us a lot of suggestions to combine all the results into one table. Actually, in the beginning we did not want to make such a table at all, because one codec can be best at low bitrates, another better at high bitrates, a third on film material, and a fourth on video conferencing. So we could average all the results, but...
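As a hypothetical illustration (the codec names and numbers below are made up), a naive combined table could simply average each codec's rank across usage areas. The sketch shows how two codecs with very different strengths can end up with the same average score, which is exactly why a single table hides more than it reveals.

    # Hypothetical per-area ranks (1 = best); all names and values are invented.
    ranks = {
        "CodecA": {"low_bitrate": 1, "high_bitrate": 3, "film": 2, "videoconf": 4},
        "CodecB": {"low_bitrate": 3, "high_bitrate": 1, "film": 4, "videoconf": 2},
    }

    def average_rank(per_area: dict) -> float:
        # Naive combined score: the mean of the per-area ranks.
        return sum(per_area.values()) / len(per_area)

    for name, per_area in sorted(ranks.items(), key=lambda item: average_rank(item[1])):
        print(f"{name}: average rank {average_rank(per_area):.2f}")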
Nevertheless, we are going to improve the informal comparison rules, so your suggestions are welcome.
Q: Why are the codec versions in your comparison not the latest ones?
A: The thing is that the comparison measurements take a rather long time, and we did not update the codec versions during the measurements (except to fix critical bugs) in order to ensure fair conditions for all developers. So some developers may have released a new version of their codec before the comparison was published.
In the future we are going to speed up our measurements through more productive work with the developers and by improving our measurement methods.
Q: Why didn't you use this type of diagram?
A: We constantly increase both the number of diagram types and the number of graphs. There are seven diagram types in our latest comparison. In the future we are going to add some new diagram types and replace some of the existing ones with improved versions. We have already started working in that direction.
Q: Why do your measurements take so much time?
A: There are a number of factors that increase the measurement time:
So, if we decreased the number of measured metrics and sequences in the set, prohibited developers from fixing bugs, and removed report verification, we would speed up our comparison several times over.
Q: When will you make new comparison?
A: In September 2006, if we do not make two comparisons per year. :)
MSU video codec comparison resources:
Project updated by Server Team and MSU Video Group
Project sponsored by YUVsoft Corp.
Project supported by MSU Graphics & Media Lab