Assignments for INF5071 / Fall 2008

Project Details

This assignment is meant to consume roughly 30 hours in total per person, including making a presentation. It has three milestones: delivery of the project plan on 26 September, presentation of the results on 14 November, and the oral exam, where the assignment is a minor topic. The project plan is not meant to be a formal project plan as taught in software engineering lectures; that alone would consume the entire allocated time. Instead, we expect an informal description that includes the following:
  • what you want to measure
  • how to perform the experiments (e.g., parameters to benchmark programs)
  • your schedule
  • how you want to present the results
Writing this requires that you have already downloaded and taken a look at the tools that you intend to use!

Topic suggestions

If none of these topics is of interest, or they are all taken, we encourage you to come up with your own suggestion. The only requirement is that it be related to efficiency in some manner. Send in your suggestion and we will let you know whether it is approved.

Compare p2p video-streaming applications

The p2p distribution model has proven to be cost-effective and efficient for the distribution of data, for both private (cough) and public parties. Recently, a number of p2p clients for live video-streaming have emerged, so it would be interesting to see how well these applications perform their job. Compare applications such as SwarmPlayer, ppLive and Veetle (or any other such client you might find interesting). If you want to install them (particularly ppLive), we strongly recommend isolating them in a virtual machine that shares nothing with the host machine and that is deleted after the testing. It would be highly interesting to see the traffic patterns of the streams created by these applications, captured with tcpdump and analysed with respect to the connections that are made. Dangerous! But interesting. We provide the machines, you handle installing and deleting Windows.
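To give an idea of how such a connection analysis could look, here is a minimal sketch that counts the remote peers a client talks to. It assumes the third-party dpkt library, a capture file named capture.pcap (made with, e.g., tcpdump -w capture.pcap) and a placeholder host address; all of these names are assumptions, not part of the assignment.

    # Minimal sketch: count the remote endpoints a p2p client talks to.
    # Assumes the dpkt library; HOST_IP and the file name are placeholders.
    import socket
    import dpkt

    HOST_IP = "192.168.1.10"   # placeholder: the machine running the p2p client
    peers = {}                 # remote IP -> packets exchanged

    with open("capture.pcap", "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            if not isinstance(eth.data, dpkt.ip.IP):
                continue
            ip = eth.data
            src = socket.inet_ntoa(ip.src)
            dst = socket.inet_ntoa(ip.dst)
            remote = dst if src == HOST_IP else src  # the far end of the packet
            peers[remote] = peers.get(remote, 0) + 1

    print("%d distinct peers" % len(peers))
    for addr, pkts in sorted(peers.items(), key=lambda p: -p[1])[:20]:
        print("%-15s %6d packets" % (addr, pkts))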


Comparison of audio-streaming applications

It is becoming increasingly common to use audio-streaming applications in conjunction with various online activities, such as gaming. In this project we encourage you to compare the latest releases of these applications with regard to throughput, latency, connection establishment time and any other factor you think is valuable (you can use tcpdump to capture this information). It would also be interesting to know which transport protocol is used primarily, and whether any kind of fall-back is used in certain situations. Applications of interest include Mumble (an open-source client), TeamSpeak and Roger Wilco; we also encourage you to seek out other alternatives.
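As a starting point for the transport-protocol question, a capture can be tallied per protocol, roughly as in the sketch below. Again, the dpkt library and the file name voice.pcap are assumptions.

    # Minimal sketch: tally TCP vs UDP traffic in a capture of a voice
    # client session, to see which transport the application prefers.
    import dpkt

    counts = {"tcp": [0, 0], "udp": [0, 0], "other": [0, 0]}  # packets, bytes

    with open("voice.pcap", "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            if not isinstance(eth.data, dpkt.ip.IP):
                continue
            ip = eth.data
            if isinstance(ip.data, dpkt.tcp.TCP):
                key = "tcp"
            elif isinstance(ip.data, dpkt.udp.UDP):
                key = "udp"
            else:
                key = "other"
            counts[key][0] += 1
            counts[key][1] += ip.len   # IP total length field

    for proto in ("tcp", "udp", "other"):
        pkts, nbytes = counts[proto]
        print("%-6s %8d packets %10d bytes" % (proto, pkts, nbytes))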


Analysis and comparison of game traffic

Install (if you have not already done so), analyse and compare the traffic generated by games like Age of Conan, EVE Online, Team Fortress 2, and any other MMOG, FPS or RTS of interest. This is accomplished by logging the network traffic from busy virtual locations inside the various games and comparing the results. Which transport protocols are used? What bandwidth is consumed? Do you experience lag/delay, and can you see it in the logs? We provide the Linux machines for tcpdump, you provide the machine that runs the game (unless you can run it in VMware).
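For the bandwidth question, one simple approach is to bin the captured bytes per second and plot the result (with gnuplot, for instance). A minimal sketch, again assuming dpkt and a placeholder file name:

    # Minimal sketch: bandwidth over time for a game-traffic capture,
    # binned per second, suitable for plotting.
    import dpkt

    bins = {}      # whole second since capture start -> bytes seen
    start = None

    with open("game.pcap", "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            if start is None:
                start = ts
            sec = int(ts - start)
            bins[sec] = bins.get(sec, 0) + len(buf)

    for sec in sorted(bins):
        print("%6d %10.1f" % (sec, bins[sec] * 8.0 / 1000))  # seconds, kbit/s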


Compare the overhead of virtual machines

Virtual machines have various uses: they provide protection from dangerous applications, they can allocate a limited set of resources to a program, or they can run several operating systems at the same time. But this comes at a performance price. Install two virtual machine monitors (e.g., Xen and VMware) and compare their raw overhead, when each has all machine resources to itself, using one CPU-bound and one I/O-bound benchmark. We provide VMware for Linux; Xen is free.
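The two benchmarks can be as simple as the sketch below, run natively and inside each VM, comparing wall-clock times. The sizes are arbitrary placeholders; scale them until a run takes tens of seconds.

    # Minimal sketch of one CPU-bound and one I/O-bound micro-benchmark.
    import os
    import time

    def cpu_bound(n=5 * 10**6):
        # Pure integer arithmetic, no I/O.
        t0 = time.time()
        total = 0
        for i in range(n):
            total += i * i
        return time.time() - t0

    def io_bound(path="bench.tmp", mb=256):
        # Sequential write (with fsync) and read of a scratch file.
        # Note: the read will largely hit the page cache; use files much
        # larger than RAM, or drop the caches, for realistic numbers.
        block = b"x" * (1 << 20)   # 1 MB
        t0 = time.time()
        f = open(path, "wb")
        for _ in range(mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
        f.close()
        f = open(path, "rb")
        while f.read(1 << 20):
            pass
        f.close()
        os.unlink(path)
        return time.time() - t0

    print("CPU-bound: %.2f s" % cpu_bound())
    print("I/O-bound: %.2f s" % io_bound())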


Compare Linux's alternative disk schedulers

Linux has several disk schedulers included in the 2.6 kernel: the noop scheduler, the Deadline I/O scheduler, the Anticipatory I/O scheduler and the Completely Fair Queuing (CFQ) scheduler. In this assignment, the different schedulers should be evaluated with respect to both request latency and throughput. This should be done using known benchmark programs like Bonnie and IOzone. Additionally, a single-process, multi-threaded benchmark should be implemented that reads streams from disk (to /dev/null); a sketch is given below. Here, one should also simulate a time-dependent stream and background load and look at deadline violations. Some information about the schedulers can be found in /usr/src/linux/Documentation/block/as-iosched.txt and /usr/src/linux/Documentation/block/deadline-iosched.txt
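A minimal sketch of the required benchmark follows: one process, several threads, each reading a large file in timed rounds and counting deadline violations. The file names, block size and period are placeholders; tune them to your stream model.

    # Minimal sketch: single process, multiple threads, each simulating a
    # time-dependent stream read from disk; late reads count as violations.
    import threading
    import time

    BLOCK = 64 * 1024     # bytes read per round
    PERIOD = 0.040        # deadline: one block every 40 ms (a 25 "fps" stream)
    FILES = ["/tmp/stream1", "/tmp/stream2", "/tmp/stream3"]  # pre-created large files

    def stream(path, results):
        rounds = violations = 0
        f = open(path, "rb")
        deadline = time.time() + PERIOD
        while True:
            data = f.read(BLOCK)            # data is discarded, i.e., "/dev/null"
            if not data:
                break
            rounds += 1
            now = time.time()
            if now > deadline:
                violations += 1             # the block arrived too late
            else:
                time.sleep(deadline - now)  # pace the reads to the stream rate
            deadline += PERIOD
        f.close()
        results[path] = (rounds, violations)

    results = {}
    threads = [threading.Thread(target=stream, args=(p, results)) for p in FILES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for path in FILES:
        rounds, violations = results[path]
        print("%s: %d reads, %d deadline violations" % (path, rounds, violations))

Run the same program once per scheduler (selected via /sys/block/<device>/queue/scheduler), with and without background load, and compare the violation counts.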