
31 August 2009

Ubuntu 9.04 (Jaunty Jackalope) gotchas


Microphone/Skype how-to



Hope this helps, because this one caused me a lot of headache.

As far as I can tell, the default settings under Ubuntu pretty much mute the microphone.
This is how I fixed it; hope it works for anyone with the same problem:

  1. Open up 'Volume Control' (for anyone new to ubuntu, if you click on the volume applet at the top right screen you can access it from there)
  2. Select 'HDA Intel (ALSA mixer)' or 'HDA NVidia (ALSA mixer)' as the Device (depending on your hardware) and click on Preferences at the bottom of the same window
  3. Check the following tick boxes - 'Front Mic', 'Front Mic Boost', 'Capture', 'Capture 1' and last but not least the two 'Input Source' boxes
  4. You should then have two new tabs labelled Options and Recording. Select Options, where you can change the input source; change it to 'Front Mic'
  5. Then select the Recording tab, raise the two capture sliders, and toggle recording for both Capture and Capture 1 to on
  6. You can then go back to the first(Playback) tab and play around with 'Front Mic' and 'Front Mic Boost' to change the recording quality. (I find that 'Front Mic Boost' just creates a lot of fuzzy noise and not a lot else so I turned that off completely.)
  7. Install a program called Audacity or the Sound Recorder
  8. Go to Applications > Sound & Video > Sound Recorder and click on the record button (notice that the level changes continually while you speak). Afterwards, click on the play button to hear the recording. If you still have problems, try going into Audacity, going to Edit --> Preferences, and changing the recording and playback devices to ALSA: pulse
  9. Now you should be ready to roll, but do one more check: open the GNOME sound settings via System -> Preferences -> Sound. The way to test whether your microphone is actually working is to click the 'Test' button next to 'Sound capture'. Press Test and talk into your microphone. If you hear yourself with a slight delay from the speakers, you have it working. If not, try every option in the 'Sound capture' device list; one of them should work. Also note that there may be duplicate entries with exactly the same name that behave differently; at least that was the case for me. Once you have found one that works (the right one for me was the default 'ALSA - Advanced Linux Sound Architecture'), you can start recording from your microphone. Try it out in gnome-sound-recorder (alias Sound Recorder).
Okay, by now your microphone should be working and you can record your microphone input. If that's the case, install Skype (the latest x64 version, or better the static (non-OSS) version from the Medibuntu repos). Once installed, open Skype, go to the 'Sound Devices' tab, and change the 'Sound In' item to 'HDA NVidia' (in my case; in yours probably 'HDA Intel' or similar).
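If you prefer the command line, you can also sanity-check the capture path with the ALSA utilities (assuming the alsa-utils package is installed; the default device depends on your hardware):

```
$ arecord -d 5 -f cd test.wav    # record 5 seconds from the default capture device
$ aplay test.wav                 # play the recording back
```

If you hear yourself in the playback, the mixer settings above are working.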

That's all, folks.

26 August 2009

Ubuntu & X Windows System (X.Org) - X11R6


1. Ubuntu Linux Secrets
2009 By Richard Blum
ISBN: 978-0-470-39508-0

2. Ubuntu Unleashed

2006 By Andrew Hudson, Paul Hudson
ISBN-10: 0-672-32909-3

3. Ubuntu Hacks
2006 By Bill Childers, Jonathan Oxer, Kyle Rankin
ISBN-10: 0-596-52720-9

The X Windows System


The X environment differs from the familiar Windows operating systems in that X is actually a server that provides graphical displays across platforms, even across networks. This makes the X environment very powerful, because using a client/server model allows for platform independence and network transportability. This client/server approach is a little different from the commonly known Windows environment: the X server portion provides the necessary software to control the graphical and input hardware, and the client application then tells the server what to display. The underlying engine of X11 is the X protocol, which provides a system for managing displays on local and remote desktops. The protocol uses a client/server model that allows an abstraction of the drawing of client windows and other decorations locally and over a network. An X server draws client windows, dialog boxes, and buttons that are specific to the local hardware and in response to client requests. The client, however, does not have to be specific to the local hardware. This means that system administrators can set up a network with a large server and clients and enable users to view and use those clients on workstations with totally different CPUs and graphics displays.

The X client does nothing to directly display the information, so a standard must be set. X defines that standard so that any X client can communicate with any X server by giving it certain display commands. The X server does the actual work of displaying the information. In this way, a client can display its information on any other platform. The only thing that other platform needs is an X server. Using this client/server model lets the actual client application be platform-independent. This means that the client application can display itself on any platform architecture for which an X server is available. For instance, in a mixed environment where you have Linux running on Intel-based PC, Mac, and SPARC platforms, a client from the Intel-based PC can run on either the Mac or the SPARC workstation. The reverse is also true; the Intel-based platform can just as easily display applications from the other platforms. In the previous scenario, a network links these different platforms together. As long as you have two or more computers connected to a network, they can share applications. Granted you have some security issues to consider, but the basic principle remains—the application runs as if it were local to the workstation. All in all, this type of structure allows for an enormous amount of flexibility when creating applications. Although the X server sets the standard for displaying information, it does not specify a policy for interacting with the user; that is the job of other components that make up the GUI: the window manager and the desktop environment.
If you have an older, slower system with limited resources, you might want to consider not using a GUI, because it can drastically slow down performance. Also, if you use the system as a server, there is no real need to have a GUI installed (even for security reasons it is better not to), and leaving it out frees resources for the server applications.
To help determine the load of a window manager on your system, use a performance meter such as xload in the x11-apps Ubuntu package to gather resource information for comparing them. Most window managers include some type of performance meter. Because the meter itself consumes resources, you can’t take it as gospel as to the resources used by the interface. However, it can give you a point of reference to compare different resources.
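For example, on Ubuntu xload is part of the x11-apps package mentioned above, so (assuming a running X session) you could do:

```
$ sudo apt-get install x11-apps    # provides xload among other small X utilities
$ xload &                          # display a scrolling system-load histogram
```

Leave it running while you try different window managers and compare the load.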
Because X offers users a form of distributed processing, Ubuntu can be used as a very cheap desktop platform for clients that connect to a powerful X server. The more powerful the X server, the larger the number of X-based clients that can be accommodated. This functionality can breathe new life into older hardware, pushing most of the graphical processing onto the server. A fast network is a must if you intend to run many X clients, because X can become bandwidth-hungry. X is hugely popular in the UNIX and Linux world for a variety of reasons. The fact that it supports nearly every hardware graphics system is a strong point, and its strong multiplatform programming standards give it a solid foundation of developers committed to X. Another key benefit of X is its networking capability, which plays a central role in the administration of many desktops and can also assist in the deployment of a thin-client computing environment. Being able to launch applications on remote desktops and to standardize installations serves to highlight the versatility of this powerful system. More recent versions of X have also included support for shaped (that is, non-rectangular) windows, graphical login managers (also known as display managers), and compressed fonts. Each release of X brings more features designed to enhance the user experience, including being able to customize how X client applications appear, right down to buttons and windows. Having applications launch from a single location makes the lives of system administrators a lot easier because they have to work on only one machine, rather than several.

X in Ubuntu


If you run the Ubuntu server, or if you run your Ubuntu workstation in text mode, there’s not much involved for Ubuntu to interact with the video card and monitor. By default Ubuntu can use just about any video card and monitor in text mode to display 25 lines of 80-column text. This feature is built into the Ubuntu Linux software so that it can directly send text to the monitor at all times. However, when you use the graphical mode on your workstation, things are a bit different. Instead of directly sending text to the monitor, Ubuntu must be able to draw lines, shade colors, and manipulate images. To do that, Ubuntu makes use of a special type of
software called X Windows to interface with the video card and monitor. Two basic elements control the video environment on your workstation:
  • The PC video card
  • The monitor
The Ubuntu operating system must interact with the video card in your PC to produce the graphical images for your desktop and to run graphical applications. The video card controls how the images are drawn on the monitor display, what colors are available to use, what size of display area you can use, and at what speed the system can draw the images. The video card must be able to interact with the monitor to display the images sent by Ubuntu. There's a wide choice of monitors available these days, with a wide variety of features, from standard old-style cathode-ray tube monitors to modern flat-screen monitors. The combination of the video card features and monitor features determines the graphics capabilities of your workstation. Ubuntu needs to know how to use and exploit these features to produce the best possible graphics for the desktop and applications. Given the wide variety of video cards and monitors available, it would be difficult for the GNOME desktop developers to have to code the features found in GNOME for every possible video card and monitor environment available. Instead, the X Windows software helps do that.

The X Windows software operates as an intermediary between the Ubuntu system and the input and output devices connected to the workstation. It’s responsible for controlling the graphical environment so that GNOME doesn’t have to support different types of video cards and monitors. Instead, the X Windows software handles all of that, and the GNOME software has to interact with just the X Windows software to display images on any type of video card and monitor combination. Besides dealing with the video card and monitor, X Windows also handles any input devices attached to the workstation, such as the keyboard and mouse. It’s the main clearinghouse for all interaction for the desktop environment. Figure 16-1 shows a typical X
Windows system.

Because the X Windows software handles all of the input and output functions for the Ubuntu workstation, it's important to ensure that the X Windows software is working properly. It must know the type of input and output devices it's communicating with so that you can interact with your desktop. The X Windows software is actually a specification of how to interact in a client/server methodology, serving the input and output devices to Ubuntu applications. Two popular X Windows implementations are currently available in the Linux world.


Linux X Windows Software
Over the years, two X Windows software packages have emerged in the Linux world:
  • XFree86 For a long time, the XFree86 software package was the only X Windows package available for Linux. As its name implies, it’s a free, open-source version of the X Windows software intended for the x86 computer platform. Unfortunately, XFree86 is notorious for being extremely hard to configure and get working properly. It uses a cryptic configuration file to define the input and output device settings on the system, which is often confusing to follow. Having the wrong values set for a device could render your workstation useless! However, because XFree86 was once the only way to produce graphical windows on Linux PCs, it was necessary to learn how to use it. As time progressed, several attempts to automate the XFree86 configuration were made. Many Linux distributions used a user-interactive method of automatically generating the XFree86 configuration file. Several dialog boxes would appear during installation, prompting the installer to select the video card and monitor setup from a list. The responses were then used to generate a configuration file. There were also attempts at trying to automatically detect video card, monitor, keyboard, and mouse settings. Some of these attempts were better than others. These efforts, though, did eventually lead to another X Windows software package.
  • X.Org More recently, a package called X.Org has come onto the Linux scene. It too provides an open-source software implementation of the X Windows system, but in a much more user-friendly way. It uses a combination of scripts and utilities to attempt to automatically detect the core input and output devices on a workstation, then creates the configuration file based on its findings. X.Org is becoming increasingly popular, and many Linux distributions are starting to use it instead of the older XFree86 system. Ubuntu uses the X.Org package to produce the graphical X Windows you see for your desktop. When you install Ubuntu, it goes through a series of steps to detect the input and output devices on your workstation (see Chapter 3, “Installing Ubuntu”). During the installation you may notice a time when it scans your video card and monitor for supported video modes. Sometimes this causes your monitor to go blank for a few seconds. Because there are many types of video cards and monitors out there, this process can take a little while to complete. Unfortunately, sometimes Ubuntu can’t autodetect what video settings to use, especially
    with some of the newer, more complicated video cards. If this happens, Ubuntu reverts to
    a default, safe X.Org configuration. The safe configuration assumes a generic video card and monitor and usually will produce a graphical desktop, although not at the highest resolution possible on your system. If this happens in your installation, don’t worry. Usually you can use the System>Preferences>Screen Resolution (now Display) utility to set the proper video mode for your setup. If all else fails, you can manually enter the settings in the X.Org configuration file.
X11R7 is the X server that is used with Ubuntu. The base Xorg distribution consists of more than 30 packages (almost 120MB), which contain the server along with support and development libraries, fonts, various clients, and documentation. An additional 1,000 or more X clients, fonts, and documentation files are also included with Ubuntu.
Note: A full installation of X and related X11R7 files can consume 170MB of hard drive space, and usually much more. This is because additional clients, configuration files, and graphics (such as icons) are under the /usr/bin and /usr/share directory trees. You can pare down excessive disk requirements by judiciously choosing which X-related packages (such as games) to install on workstations. Today, however, the size requirements are rarely a problem, except when configuring thin-client desktops or embedded systems.
The /usr directory and its subdirectories contain the majority of Xorg's software. Some important subdirectories are:
  • /usr/X11R6 contains a link to /usr/bin, which is the location of the X server and various X clients. (Note that not all X clients require active X sessions.)
  • /usr/include This is the path to the files necessary for developing X clients and graphics such as icons.
  • /usr/lib This directory contains required software libraries to support the X server and clients.
  • /usr/lib/X11 This directory contains fonts, default client resources, system resources, documentation, and other files that are used during X sessions and for various X clients. You will also find a symbolic link to this directory, named X11, under the /usr/lib directory.
  • /usr/lib/xorg/modules This path contains the X server modules and links to the drivers that the X server uses to support various graphics cards.
  • /usr/X11/man This directory contains directories of man pages for X11 programming and clients.
The main components required for an active local X session are installed on your system if you choose to use a graphical desktop. These components are the X server, miscellaneous fonts, a terminal client (that is, a program that provides access to a shell prompt), and a client known as a window manager. Window managers administer on-screen displays, including overlapping and tiling windows, command buttons, title bars, and other on-screen decorations and features.

The X.Org Configuration
(xorg.conf)

The core of the X.Org configuration is the xorg.conf configuration file, located in the /etc/X11 folder. This configuration file contains all of the settings detected by X.Org when you installed Ubuntu. Should you need to change resolution or refresh frequency post-install, you should use the gnome-display-properties application. Information relating to hardware, monitors, graphics cards, and input devices is stored in the xorg.conf file, so be careful if you decide to tinker with it in a text editor!
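Before tinkering with it, it is worth keeping a backup that you can restore from a text console if X stops starting (the backup name is just a suggestion):

```
$ sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup
```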

The xorg.conf configuration file contains several sections, each defining a different element of the input and output system. Each section itself may contain one or more subsections that further define the input or output device. The basic format of a section looks like this:

# Comment for a section
Section "Name"
    EntryName EntryValue

    SubSection "Subname"
        EntryName EntryValue
    EndSubSection
EndSection

The section and subsection areas consist of a name/value pair that defines a setting for the device, such as the type of mouse or the available viewing modes of a monitor.
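One easy mistake when hand-editing is leaving a Section or SubSection unclosed. As a rough sketch (the fragment and filename below are made up for illustration), you can count the opening and closing keywords to check that they pair up:

```shell
# Create a minimal, hypothetical xorg.conf fragment in the current directory.
cat > sample-xorg.conf <<'EOF'
Section "InputDevice"
    Identifier "Generic Keyboard"
    Driver "kbd"
EndSection
EOF

# Count opening and closing keywords; the two counts should match.
opens=$(grep -c '^Section' sample-xorg.conf)
closes=$(grep -c '^EndSection' sample-xorg.conf)
echo "Section lines: $opens, EndSection lines: $closes"
```

On a real system you would point the greps at /etc/X11/xorg.conf instead.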


Defining Sections
The components, or sections, of the xorg.conf file specify the X session or server layout, along with pathnames for files that are used by the server, any options relating directly to the server, any optional support modules needed, information relating to the mouse and keyboard attached to the system, the graphics card installed, the monitor in use, and of course the resolution and color depth that Ubuntu uses. (The xorg.conf man page contains full documentation of all the options and other keywords you can use to customize your desktop settings.) Of the 12 sections of the file, these nine are the essential components:
  • Device: Describes the characteristics of one or more graphics cards and specifies what optional (if any) features to enable or disable.
  • DRI: Includes information about the Direct Rendering Infrastructure, which contains the hardware acceleration features found in many video cards.
  • Files: Lists pathnames of font files, the file containing the color database used for the display, the location of fonts, or the port number of a font server.
  • InputDevice: Lists information about the keyboard and mouse or other pointing devices such as trackballs, touchpads, or tablets; multiple devices can be used.
  • Module: Defines X server extension modules and font modules to load.
  • Monitor: Lists the monitor specifications (capabilities of any attached display); multiple monitors can be used.
  • ServerFlags: Lists X server options for controlling features of the X Windows environment.
  • ServerLayout: Combines one or more InputDevice and Screen sections to create a layout for an X Windows environment. Defines the display, defines one or more screen layouts, and names input devices.
  • Screen: Defines a video card and monitor combination used by the X server. Defines one or more resolutions, color depths, perhaps a default color depth, and other settings.
The sections appear on an as-needed basis. That is, you’ll only see the sections defined in the X.Org configuration file that are actually used to describe devices on your workstation. Thus, if you don’t have any special font or color files defined, you won’t see a Files section in the configuration file on your Ubuntu workstation.
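For reference, a Files section, when present, might look something like this (the paths are illustrative; actual font paths vary by release):

```
Section "Files"
    RgbPath  "/etc/X11/rgb"
    FontPath "/usr/share/fonts/X11/misc"
    FontPath "/usr/share/fonts/X11/Type1"
EndSection
```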


Example Configuration
To demonstrate the X.Org configuration file layout, let’s take a look at an example configuration
file from an Ubuntu workstation.

Section "InputDevice"
    Identifier "Generic Keyboard"
    Driver "kbd"
    Option "XkbRules" "xorg"
    Option "XkbModel" "pc105"
    Option "XkbLayout" "us"
EndSection

Section "InputDevice"
    Identifier "Configured Mouse"
    Driver "mouse"
    Option "CorePointer"
EndSection

Section "InputDevice"
    Identifier "Synaptics Touchpad"
    Driver "synaptics"
    Option "SendCoreEvents" "true"
    Option "Device" "/dev/psaux"
    Option "Protocol" "auto-dev"
    Option "HorizEdgeScroll" "0"
EndSection

Section "Device"
    Identifier "Configured Video Device"
EndSection

Section "Monitor"
    Identifier "Configured Monitor"
EndSection

Section "Screen"
    Identifier "Default Screen"
    Monitor "Configured Monitor"
    Device "Configured Video Device"
EndSection

Section "ServerLayout"
    Identifier "Default Layout"
    Screen "Default Screen"
    InputDevice "Synaptics Touchpad"
EndSection

This configuration file defines several sections for input and output devices.
  • The first section defines an InputDevice—specifically, a standard U.S. 105-key keyboard as the input device. The Identifier entry for the device, Generic Keyboard, declares a name for the device. The driver that X.Org uses to manage the device is defined using the Driver entry. After that, the section includes a few options that define specific characteristics for the keyboard device.
  • The second section also defines an InputDevice, but in this instance it defines a standard mouse, using a standard mouse driver and no additional options.
  • In the next section, you see yet another definition for an InputDevice, but this one defines a touchpad mouse used on a laptop. The touchpad uses a generic Synaptics driver to interact with the touchpad and defines a few options to control how the touchpad operates.
  • You can configure multiple devices, and there might be multiple InputDevice sections. After the three InputDevice sections, the next three sections (Device, Monitor, Screen) define the video environment for the workstation. You may notice something odd about the device and monitor sections that are defined in the configuration file. The configuration file doesn’t contain any drivers or settings for the video card device or the monitor. This X.Org feature is relatively new. When a device appears in the configuration file without any settings, it forces the X.Org software to automatically attempt to detect the device each time you start a new X Windows session. Ubuntu started using this method in version 8.04 to help facilitate adding new video card and monitor features after installation. By automatically detecting the video environment each time the system starts, Ubuntu can detect when you install new hardware. The time necessary to autodetect the new hardware isn’t very significant, so the performance penalty of redetecting hardware is small, relative to the benefit of automatically detecting new hardware. The Screen section in the configuration file ties the monitor and video card together into a single device for X.Org. Using this configuration, X.Org knows which video card and monitor are paired. Although this feature is somewhat trivial in a single-monitor situation, if you have dual monitors and dual video cards, it’s a must.
    If X.Org is unable to detect your video card and monitor (or incorrectly detects them), you can manually enter the settings in the xorg.conf file. When X.Org detects manual settings for a device, it doesn’t attempt to automatically detect the device; it uses the predefined values instead. For video cards, you’ll need to enter the name of the video card driver used to control the video card, plus any additional required options to define the video card settings. The same applies to monitors; if you’re using a special video card or monitor, the manufacturer will often provide the necessary X Windows driver and configuration settings. Here’s an example of manual video card and monitor entries in the xorg.conf configuration file. Once you’ve defined the video card and monitor, you must define a Screen section to link the two devices and define features for the screen:

Section "Device"
    Identifier "Videocard0"
    Driver "nv"
    VendorName "Videocard vendor"
    BoardName "nVidia GeForce 2 Go"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "HP G72"
    DisplaySize 320 240
    HorizSync 30.0-85.0
    VertRefresh 50.0-160.0
    Option "dpms"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Videocard0"
    Monitor "Monitor0"
    DefaultDepth 24

    SubSection "Display"
        Viewport 0 0
        Depth 24
        Modes "1024x768" "800x600" "640x480"
    EndSubSection

    SubSection "Display"
        Depth 16
        Modes "1024x768" "800x600" "640x480"
    EndSubSection
EndSection
  • The Monitor Section: The Monitor section configures the designated display device as declared in the ServerLayout section. Note that the X server automatically determines the best video timings according to the horizontal and vertical sync and refresh values in this section. If required, old-style modeline entries (used by distributions and servers prior to XFree86 4.0) might still be used. If the monitor is automatically detected when you configure X, its definition and capabilities are inserted in your xorg.conf file from the MonitorsDB database. This database contains more than 600 monitors and is located in the /usr/share/hwdata directory.
  • The Device Section: The Device section provides details about the video graphics chipset used by the computer. The Driver entry tells the Xorg server to load the nv_drv.o module from the /usr/lib/xorg/modules/drivers directory. Different chipsets have different options. The Xorg server supports hundreds of different video chipsets. If you configure X11 but subsequently change the installed video card, you need to edit the existing Device section or generate a new xorg.conf file, using one of the X configuration tools, to reflect the new card's capabilities. You can find details about options for some chipsets in a companion man page or in a README file under the /usr/lib/X11/doc directory. You should look at these sources for hints about optimizations and troubleshooting. However, this should rarely be necessary, as Ubuntu sports a comprehensive hardware detection system that automatically adjusts settings to take account of newly installed hardware.
  • The Screen Section: The Screen section ties together the information from the previous sections (using the Screen0, Device, and Monitor Identifier entries). It can also specify one or more color depths and resolutions for the session. In this example a color depth of millions of colors and a resolution of 1024x768 is the default, with optional resolutions of 800x600, and 640x480. Multiple Display subsection entries with different color depths and resolutions (with settings such as Depth 16 for thousands of colors) can be used if supported by the graphics card and monitor combination. You can also use a DefaultDepth entry (which is 24, or millions of colors, in the example), along with a specific color depth to standardize display depths in installations. You can also specify a desktop resolution larger than that supported by the hardware in your monitor or notebook display. This setting is known as a virtual resolution in the Display subsection. This allows, for example, an 800x600 display to pan (that is, slide around inside) a virtual window of 1024x768.
  • Note: If your monitor and graphics card support multiple resolutions and the settings are properly configured, you can use the key combination Ctrl+Alt+Keypad+ or Ctrl+Alt+Keypad- to change resolutions on the fly during your X session.
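Two of the constructs mentioned above can be sketched as xorg.conf fragments. The first shows an old-style modeline inside a Monitor section (the timings are the standard VESA 1024x768 at 60Hz mode, shown purely for illustration); the second shows a virtual resolution in a Display subsection:

```
Section "Monitor"
    Identifier "Monitor0"
    # dot clock (MHz), horizontal timings, vertical timings, sync flags
    Modeline "1024x768" 65.0 1024 1048 1184 1344 768 771 777 806 -hsync -vsync
EndSection

SubSection "Display"
    Depth 24
    Modes "800x600"
    Virtual 1024 768    # pan an 800x600 display inside a 1024x768 desktop
EndSubSection
```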
In particular, if your computer is a laptop with a touchpad, the file may have several InputDevice entries, so make sure you find the one that refers to your mouse. If it was configured automatically by Xorg, it will probably look something like this:

Section "InputDevice"
    Identifier "Configured Mouse"
    Driver "mouse"
    Option "CorePointer"
    Option "Device" "/dev/input/mice"
    Option "Protocol" "ImPS/2"
    Option "ZAxisMapping" "4 5"
    Option "Emulate3Buttons" "true"
EndSection


Start by changing the Protocol value. The ExplorerPS/2 driver supports more devices than the older ImPS/2 driver, so substitute this line:

Option        "Protocol" "ExplorerPS/2"

Since your multi-button mouse almost certainly has a middle button, you don't need the Emulate3Buttons option anymore, so delete it. Unfortunately, there is no way for your computer to automatically determine the number of buttons available on a mouse, so you need to add an option that explicitly tells Xorg how many it has. It's obvious that you need to count all the actual physical buttons on the mouse, but remember that you usually need to add three more: one for clicking the scroll wheel, one for scroll-up, and one for scroll-down. A typical scroll mouse with two main buttons on the top, two on the side, and a scroll wheel actually has seven buttons as far as the driver is concerned, so add a line like this:
Option        "Buttons" "7"

Next, map the action of the scroll wheel to virtual buttons using the ZAxisMapping option. In the case of a simple scroll wheel that moves only up or down, you can assign two values, which are associated with negative (down) and positive (up) motion, respectively:
Option        "ZAxisMapping" "4 5"

Some mice have a scroll wheel that also rocks from side to side, and some even have two scroll wheels, in which case you can add mappings for negative and positive motion in the second axis:

Option        "ZAxisMapping" "4 5 6 7"

Unfortunately, you may find the second scroll direction isn't recognized by Xorg at all because there is currently no standard for defining how mice should encode horizontal scroll data when it's transmitted to the driver. Even if it is recognized, you may find the rocker or second scroll wheel moves in the opposite direction to what you expect, so you may need to reverse the third and fourth values.
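If that happens, swap the third and fourth values so the second axis maps the other way around:

Option        "ZAxisMapping" "4 5 7 6"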

Some browsers such as Firefox are hardcoded to use buttons 4 and 5 as shortcuts for "back" and "forward," but because some wheel mice report wheel-up and wheel-down as the fourth and fifth button events, respectively, you may need to do some extra work to use the side buttons as back and forward. You can remap the reported button events by calling xmodmap:

$ xmodmap -e "pointer = 1 2 3 6 7 4 5"

The xmodmap command will need to be run each time you log in to GNOME, so go to System>Preferences>Sessions>Startup Programs and put in the whole line; then compensate for the offset button values by using a modified ZAxisMapping line in /etc/X11/xorg.conf:
Option       "ZAxisMapping" "6 7"

One final option you can configure is mouse resolution. Many multi-button gaming mice run at very high resolutions to enable accurate targeting, but you may find that it throws off Xorg's response curve. In that case, it may help to add a Resolution option in dpi (dots per inch):
Option       "Resolution" "2400"

Once you have made all of these changes, your mouse configuration will look something like this:

Section "InputDevice"
    Identifier "Configured Mouse"
    Driver     "mouse"
    Option     "CorePointer"
    Option     "Device"       "/dev/input/mice"
    Option     "Protocol"     "ExplorerPS/2"
    Option     "Buttons"      "7"
    Option     "ZAxisMapping" "4 5"
EndSection
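As a quick sanity check after editing, you can grep the mouse-related options back out of the file. The sketch below writes a sample section to a file in /tmp first, so it's safe to run anywhere; point grep at /etc/X11/xorg.conf to check the real file.

```shell
# Write a sample InputDevice section to a temp file (a stand-in for
# /etc/X11/xorg.conf), then extract the options we care about.
cat > /tmp/xorg-mouse.conf <<'EOF'
Section "InputDevice"
    Identifier "Configured Mouse"
    Driver     "mouse"
    Option     "Device"       "/dev/input/mice"
    Option     "Protocol"     "ExplorerPS/2"
    Option     "Buttons"      "7"
    Option     "ZAxisMapping" "4 5"
EndSection
EOF
# Confirm both options are present and spelled correctly:
grep -E 'Buttons|ZAxisMapping' /tmp/xorg-mouse.conf
```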

To apply changes, you need to restart Xorg. The easiest way to do so is to log out of your current session and then press Ctrl-Alt-Backspace, which will kill Xorg and force GDM to restart it (if GDM doesn't restart it, log in at the console and run the command):

$ sudo /etc/init.d/gdm restart

Log back in and launch xev, the X event reporter; click each button and scroll the wheel in both directions. Each event causes a button number to be reported in the terminal. If everything went as planned, each button will report a different number.



Configuring X
Although the Ubuntu installer can be relied upon to configure X during installation, problems can arise if the PC's video card is not recognized. If you do not get the graphical login that should come up when you reboot after installation, then you will have to do some configuration by hand in order to get X working.
Note that some installs, such as for servers, do not require that X be configured for use to support active X sessions, but might require installation of X and related software to support remote users and clients.
You can use the following configuration tools, among others, to create a working xorg.conf file:
  • $ sudo dpkg-reconfigure xserver-xorg
    This is Ubuntu's text-based configuration tool, which guides you through creating an xorg.conf file. You can use dpkg-reconfigure to create or update an xorg.conf file. The beauty of this tool is that it is command-line only, so it can be used if you have problems with your Xorg server. The command brings up a dialog for configuring X. Your best bet is to try autodetection before moving on to manual configuration. Nine times out of ten Ubuntu gets it right, but if you need to configure X manually, make sure you have all the necessary details, such as:

    • Graphics card maker and chipset (e.g., ATI 9600 [R350])
    • Amount of memory on your graphics card
    • Refresh rates (both horizontal and vertical) for your monitor
    • Supported screen resolutions for your monitor
    • Type of keyboard and mouse that you are using

    If you have all of this information available then you will have no problem configuring X.
  • Xorg: The X server itself can create a skeletal working configuration. You can write the xorg.conf file from scratch using a text editor, but you can also generate one automatically by using the Xorg server or configuration utilities. As the root operator, you can use the following on the server to create a test configuration file:

    # X -configure

    After you press Enter, a file named xorg.conf.new is created in root's home directory (/root). You can then use this file for a test session, like this:

    # X -config /root/xorg.conf.new
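Putting the steps together, the whole manual cycle looks like the sketch below. The first two commands need root and a free console, so they are shown as comments; the backup step is demonstrated on a temp stand-in file, so the example is safe to run as-is.

```shell
# 1. Generate a skeleton config (as root, with X stopped):
#      X -configure                      # writes /root/xorg.conf.new
# 2. Test-drive the generated file:
#      X -config /root/xorg.conf.new
# 3. If the test session works, back up the old config before
#    installing the new one. Shown here on a temp file rather than
#    the real /etc/X11/xorg.conf:
conf=/tmp/demo-xorg.conf
echo 'Section "ServerLayout"' > "$conf"  # pretend this is the old config
cp "$conf" "$conf.backup"                # never overwrite without a backup
echo "backup saved as $conf.backup"
```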
The following sections discuss how to use each of these software tools to create a working xorg.conf file.


Starting X
You can start X sessions in a variety of ways.

  • The Ubuntu installer sets up the system initialization table /etc/inittab to have Linux boot directly to an X session using a display manager (that is, an X client that provides a graphical login). After you log in, you use:
  1. a local session (running on your computer)
  2. or, if the system is properly configured, an X session running on a remote computer on the network.
Logging in via a display manager requires you to enter a username and password.

  • You can also start X sessions from the command line. The following sections describe these two methods.


Using a Display Manager

An X display manager presents a graphical login that requires a username and password to be entered before access is granted to the X desktop. It also allows you to choose a different desktop for your X session. Whether or not an X display manager is presented after you boot Linux is controlled by a runlevel, a system-state entry in /etc/inittab. The following runlevels are defined in the file:

# Runlevel 0 is halt.
# Runlevel 1 is single-user.
# Runlevels 2-5 are multi-user.
# Runlevel 6 is reboot.

Runlevels 2-5 are used for multiuser mode with a graphical X login via a display manager; booting to runlevel 1 provides a single-user, text-based login. The initdefault setting in the /etc/inittab file determines the default runlevel:

id:2:initdefault:

In this example, Linux boots and then runs X.
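The runlevel is simply the second colon-separated field of that entry, so you can read it back with cut. A minimal sketch, using a sample line rather than reading the real /etc/inittab:

```shell
# Sample initdefault entry; on a real system you would grep this line
# out of /etc/inittab instead of hardcoding it.
line='id:2:initdefault:'
runlevel=$(echo "$line" | cut -d: -f2)
echo "default runlevel: $runlevel"
# prints: default runlevel: 2
```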


Configuring gdm
The gdm display manager is part of the GNOME library and client distribution included with Ubuntu and provides a graphical login when a system boots directly to X. Its login screen (which is actually displayed by the gdmlogin client) hosts pop-up menus of window managers, languages, and system options for shutting down (halting) or rebooting the workstation. Although you can edit (as root) gdm.conf under the /etc/gdm directory to configure gdm, a much better way to configure GNOME's display manager is to use the gdmsetup client, which lets you configure many aspects and features of the login display. You launch this client from the GNOME desktop panel's System>Administration>Login Window menu item, or from the command line, like this:

$ gksudo gdmsetup &


After you press Enter, you see the GDM Setup window, as shown in the figure above. You can specify settings for security, remote network logins, the X server, and session and session-chooser setup by clicking the tabs in the GDM Setup dialog.

Configuring kdm
The kdm client, which is part of the KDE desktop suite, offers a graphical login similar to gdm. You configure kdm by running the KDE Control Center client (kcontrol), as the root operator, by clicking the Control Center menu item from the KDE kicker or desktop panel menu. You can also start KDE Control Center by using the kcontrol client at the command line like so:

$ kcontrol &

In the Index tab of the left pane of the KDE Control Center window, you click the System Administration menu item to open its contents, and then you click the Login Manager menu item. The right pane of the Control Center window displays the tabs and configuration options for the kdm Login Manager. To make any changes to the KDE display manager while logged in as a regular user, you must first click the Administrator Mode button, and then enter the root operator password. You can click on a tab in the Control Center dialog to set configuration options. Options in these tabs allow you to control the login display, prompts, user icons, session management, and configuration of system options (for shutting down or rebooting). After you make your configuration choices in each tab, click the Apply button to apply the changes immediately; otherwise, the changes are applied when the X server restarts.

Using the xdm Display Manager
The xdm display manager is part of the Xorg distribution and offers a bare-bones login for using X. Although it is possible to configure xdm by editing various files under the /etc/X11/xdm directory, GNOME and KDE offer a greater variety of options in display manager settings. The default xdm login screen's display is handled by the xsetroot client, which is included with Xorg, and Owen Taylor's xsri client, as specified in the file Xsetup_0 in the xdm directory under /etc/X11. The xsri client can be used to set the background color of the login display's desktop and to place an image in the initial display.


Starting X from the Console by Using startx
If you have Ubuntu set to boot to runlevel 1, a text-based console login, you can start an X session from the command line. You use the startx command (which is actually a shell script) to do so. You launch the X server and an X session by using startx, like this:

$ startx

startx first looks in your home directory for a file named .xinitrc. This file can contain settings that will launch an alternative desktop and X clients for your X session. The default system xinitrc is found in the /etc/X11/xinit directory, but a local file can be used instead to customize an X session and launch default clients. For example, you can download and install the mlvwm window manager into the /usr/local/bin directory. You can then use the mlvwm desktop for your X session, along with the xterm terminal client, by creating an .xinitrc file that contains the following:

xterm &
exec /usr/local/bin/mlvwm

Using a custom .xinitrc is not necessary if you're using Ubuntu's desktop, which runs X and either a GNOME-aware window manager or KDE as a desktop environment.
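Because .xinitrc is interpreted by the shell, a quick `sh -n` syntax check before you log out can save you from a session that dies on startup. A sketch, written to /tmp so it doesn't touch your real file (the mlvwm path is just an illustration):

```shell
# Write an example .xinitrc to a temp path, then syntax-check it
# with sh -n, which parses the script without executing anything.
cat > /tmp/xinitrc-example <<'EOF'
xterm &
exec /usr/local/bin/mlvwm
EOF
sh -n /tmp/xinitrc-example && echo "syntax OK"
```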

You can also use the startx command with one or more command-line options. These options are passed to the X server before it launches an X session. For example, you can use startx to specify a color depth for an X session by using the -depth option, followed by a number such as 8, 16, 24, or 32 for 256, thousands, or millions of colors (as defined in the X configuration file and if supported). Using different color depths can be useful during development for testing how X clients look on different displays, or to conserve use of video memory, such as when trying to get the highest resolution (increased color depth can sometimes affect the maximum resolution of older video cards). For example, to start a session with thousands of colors, you use the startx command like this:

$ startx -- -depth 16

Another option that can be passed is a specific dots-per-inch (dpi) resolution that is to be used for the X session. For example, to use 100 dpi, you use the -dpi option followed by 100, like this:

$ startx -- -dpi 100

You can also use startx to launch multiple X sessions. This feature is due to Linux support for virtual consoles, or multiple text-based displays. To start the first X session, you use the startx command followed by a display number, or an X server instance (the first is 0, using screen 0) and a number that represents a virtual console. The default console used for X is number 7, so you can start the session like this:

$ startx -- :0 vt7

After X starts and the window manager appears, you press Ctrl+Alt+F2 and then log in again at the prompt. Next, you start another X session like this, specifying a different display number and virtual console:

$ startx -- :1 vt8

Another X session starts. To jump to the first X session, press Ctrl+Alt+F7. You use Ctrl+Alt+F8 to return to the second session. If you exit the current session and go to another text-based login or shell, you use Alt+F7 or Alt+F8 to jump to the desired session.

Using startx is a flexible way to launch X sessions, but multiple sessions can be confusing, especially to new users, and are a horrific resource drain on a system that does not have enough CPU horsepower and memory. A better approach is to use multiple workspaces, also known as virtual desktops.


Selecting and Using Window Managers

A window manager is usually launched immediately after the X server starts. The window manager looks after the general look and feel of the interface, as well as the actual drawing of scrollbars, buttons, and so on. A window manager is essential to interact with the X server; without one, X client windows would not be able to be moved around or resized. Linux allows for a wide variety of window managers, and each window manager caters for specific requirements. This variety is one of the features that makes Linux and X itself more popular.

A window manager provides the user with a graphical interface to X, as well as a customized desktop which includes the look and feel of the window manager.
  • Things such as icons, panels, windows, grab handles, and scroll bars are defined by the window manager's general settings and are usually unique to that window manager.
  • A window manager might also provide menuing on the root desktop or after a button is clicked in a client's window title bar.
  • Some window managers support the use of special keyboard keys to move the pointer and emulate mouse button clicks.
  • Another feature is the capability to provide multiple workspaces, or a virtual desktop, which is not the same as the virtual screen; whereas a virtual screen is a desktop that is larger than the display, a virtual desktop offers two, four, or eight additional complete workspaces. Switching between these window managers is fairly simple.
    Before you log in, click the Options button on the login screen and choose Select Session. You will then be given a list of the installed window managers that are ready for use. You can change your default session to another window manager, or use it for one session only. Do not worry about losing your favorite window manager to another one; just change it back when you next return to the login screen.

The GNOME and KDE Desktop Environments
A desktop environment for X provides one or more window managers and a suite of clients that conform to a standard graphical interface based on a common set of software libraries. When they are used to develop associated clients, these libraries provide graphical consistency for the client windows, menus, buttons, and other onscreen components, along with some common keyboard controls and client dialogs. The following sections briefly discuss the two desktop environments that are included with Ubuntu: GNOME and KDE.

GNOME: The GNU Network Object Model Environment
The GNOME project, which was started in 1997, is the brainchild of programmer whiz Miguel de Icaza. GNOME provides a complete set of software libraries and clients. GNOME depends on a window manager that is GNOME-aware. This means that to provide a graphical desktop with GNOME elements, the window manager must be written to recognize and use GNOME. Some compliant window managers that are GNOME-aware include Havoc Pennington's metacity (the default GNOME window manager), Enlightenment, Window Maker, IceWM, and sawfish.

Ubuntu uses GNOME's user-friendly suite of clients to provide a consistent and user-friendly desktop. GNOME clients are found under the /usr/bin directory, and GNOME configuration files are stored under the /etc/gnome and /usr/share/gnome directories, with user settings stored in the home directory under .gnome.

gconf-editor client
This client is used for setting GNOME configuration options. You can configure your desktop in various ways using different menu items under the Preferences menu. For a comprehensive icon view of preference items, look under the System menu to find many different tools. Nautilus, the GNOME file browser, was originally developed by Eazel (which ceased operations shortly before summer 2001). The Nautilus shell is used on the Ubuntu desktop as a file browser and desktop utility. The Nautilus main window presents a hierarchy of subdirectories and files in a home directory.

KDE: The K Desktop Environment
KDE, which is included with Kubuntu, has been available for Linux, Xorg, and XFree86 since 1996. KDE is a graphical desktop environment that offers a huge suite of clients, including a free office suite named KOffice. KDE clients are located under the /usr/bin directory, and nearly all clients have a name that begins with k.
The .kde directory in your home directory contains custom settings and session information. You can use KDE's Control Center to customize desktop settings. You can launch this client by clicking the Control Center menu item from KDE's desktop menu (hosted by the panel along the bottom of your desktop, known as the kicker) or from the command line, like so:

$ kcontrol &


Xfce: The Lightweight Alternative

With the release of the Ubuntu 6.06 family of distros came another sibling, Xubuntu. This version is aimed specifically at low-specification machines and uses a very lightweight desktop environment called Xfce. Xfce can make use of most GNOME applications, thanks to being able to access the GTK2 libraries that GNOME is built upon. If you like your window manager simple and uncomplicated, and GNOME is straining your hardware, then choose Xfce.


Ubuntu Video Configuration
After you’ve installed Ubuntu, you can perform a few manual changes to the X Window System using graphical tools available on the desktop.

The Screen Resolution Utility
The X.Org environment in Ubuntu is rapidly developing and changing, and further advances and ideas are implemented in each new Ubuntu release. Currently, the core utility for configuring your video settings on the Ubuntu desktop is the Display utility (System ➪ Preferences ➪ Display).
In my case, a dialog box appeared:
It appears that your graphics driver does not support the necessary extensions to use this tool. Do you want to use your graphics driver vendor's tool instead?
  1. If I click No, the Display dialog box appears. It is pretty basic; there are only a few things you can modify here:
  • Resolution: Select the screen resolution from a list of supported resolutions for your video card and monitor combination. X.Org automatically detects resolutions that are supported and displays only those resolutions.
  • Refresh Rate: Select the screen refresh rate for your monitor.
  • Rotation: Set the screen orientation for the monitor. The options are
  • Normal: Display the desktop at the normal orientation for the monitor.
  • Left: Display the desktop using the left side of the monitor as the top of the desktop.
  • Right: Display the desktop using the right side of the monitor as the top of the desktop.
  • Upside Down: Display the desktop using the bottom of the monitor as the top of the desktop.
  • Mirror Screens: Create identical desktops on dual monitor setups instead of expanding the desktop to both monitors.
  • Detect Displays: Re-scan the video cards and monitors for the workstation.
The Mirror Screens option determines how X.Org handles two or more monitors connected to the workstation. When you select the Mirror Screens check box, X.Org duplicates the desktop on both monitors. When you deselect it, X.Org treats the monitors separately and distributes the desktop layout between them. When you use this feature, additional screen areas appear in the Screen Resolution window, one box for each monitor connected to the workstation. You can drag and drop the different monitors in the window. The location of a monitor determines which part of the expanded desktop it displays: if you set the monitor images side by side, the desktop expands sideways between the monitors; if you set them one on top of the other, the desktop expands vertically.
Each monitor image has its own group of settings. Click on a monitor image to view the settings for that monitor. By default, X.Org will set the display resolution of the two monitors to their highest common value.
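Under the hood, this utility drives the XRandR extension, and you can set the same layouts from the command line with xrandr. A hedged sketch follows: the output names VGA and LVDS are examples (run `xrandr -q` to see yours), and the commands are written to a script and only syntax-checked, so the example is safe to run without an X session.

```shell
# Save an example dual-head script, then parse-check it with sh -n
# (nothing is executed, so no X session is required).
cat > /tmp/dual-head.sh <<'EOF'
#!/bin/sh
# Extend the desktop: put the external VGA monitor left of the laptop panel.
xrandr --output VGA --auto --left-of LVDS
# Or mirror the laptop panel onto VGA instead of extending:
# xrandr --output VGA --auto --same-as LVDS
EOF
sh -n /tmp/dual-head.sh && echo "script OK"
```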
If you plug a second monitor into a laptop to use as a cloned monitor, make sure that the additional video port on the laptop is enabled in the system BIOS settings. Some laptops disable external video ports when not being used.

Setting Compiz Fusion Features
If your workstation contains an advanced video card, then besides the basic video settings available in the Screen Resolution utility, you can enable advanced visual effects, depending on what your video card supports. Ubuntu uses the Compiz Fusion software to provide advanced video features for the desktop. The Compiz Fusion software package is an open-source product that combines 3-D desktop features with advanced window management features using plug-ins. Ubuntu includes a generous sampling of plug-ins to provide lots of fancy graphical features for the desktop environment. Ubuntu provides two interfaces you can use to control the Compiz Fusion features enabled on the desktop.

Basic Visual Effects
The Appearance Preferences window provides the Visual Effects tab for enabling or disabling the level of animated effects used on the desktop. To get there, select the System ➪ Preferences ➪ Appearance entry from the Panel menu, then select the Visual Effects tab. This window provides three generic settings for feature levels:
  • None: No Compiz Fusion elements are enabled for the video card.
  • Normal: Enables a few basic video elements, such as fading windows, to enhance the desktop look and feel.
  • Extra: Enables advanced video features such as wobbly windows when you move a window, animations for windows operations, and extra window decorations to liven up your Ubuntu desktop.
The default setting depends on the capabilities of your video card. For basic video cards with no advanced features, Ubuntu sets this option to None by default. Unless you have a really old video card, you should be able to set this value to the Normal level. If you have an advanced video card in your workstation, try the Extra setting and see the extra effects in action!

Advanced Visual Effects
The Visual Effects settings provide a generic way to enable effects on your workstation. If your Ubuntu workstation is connected to the Internet, you can customize the Compiz Fusion visual effects settings by installing the CompizConfig Settings Manager. This tool allows you to enable and disable individual visual effects to liven up your desktop (if you have an advanced video card). To install the CompizConfig Settings Manager, follow these steps:
  1. Start the Add/Remove Applications program by selecting Applications ➪ Add/Remove from the Panel menu.
  2. Select the All section on the left side list and ensure that All Open Source applications is set for the Show drop-down box.
  3. Type compiz in the search dialog box, then hit the Enter key. The search results will appear in the top application list, with the descriptions appearing below the list.
  4. Check the box for Advanced Desktop Effects Settings (ccsm).
  5. Click the Apply Changes button to begin the installation. The Add/Remove Applications program will ask you to confirm your selection, then ask for your password to start the installation.
  6. Close the Add/Remove Applications window.
Once you’ve installed the CompizConfig Settings Manager, you can access it by selecting System ➪ Preferences ➪ CompizConfig Settings Manager from the Panel menu.
The visual effects are divided into eight sections, shown in Table 16-1. Each section contains a set of related plug-ins you can enable on your desktop. Each Compiz Fusion plug-in provides a different visual effect feature. Besides enabling an individual visual effect, you can customize it. Select a plug-in from the list to view and change the settings for the plug-in.
Each plug-in has its own settings panel, which enables you to completely customize your desktop experience.

Monitor and Video Cards
In the old days of Ubuntu (up to the 7.10 Gutsy Gibbon release), the Screens and Graphics utility was included so that you could manually change the video card and monitor settings for X.Org.
Because the Ubuntu developers are striving for automatic detection of the video environment using the Screen Resolution (Display) utility, the manual Screens and Graphics utility has been removed from the menu system. However, for the time being it’s still included in the Ubuntu distribution (until 8.10), and you can use it to customize your video environment. First, you need to add the Screens and Graphics utility to your menu. Follow these steps:
  1. Select System ➪ Preferences ➪ Main Menu from the top panel. The Main Menu editor appears.
  2. Click the Applications section drop-down arrow, then select the Other section heading. The list of other applications available appears on the right side of the window.
  3. Select the check box next to the Screens and Graphics entry, then click the Close button.
This process adds the Other section to the Applications menu area and places the Screens and Graphics menu item in that section. To start it, just select Applications ➪ Other ➪ Screens and Graphics from the Panel menu. Now you’re ready to manually configure your video card and monitor. The Screen tab shows the monitors detected on the system. You can manually set the features for each monitor, including its model and resolution capabilities. If you have multiple monitors, you can designate the default monitor and the secondary monitor. You can also indicate whether they should be cloned or, if you extend the desktop, which part of the desktop each one handles. The Graphics Card tab lists the graphics cards detected on the system. This tab allows you to select the specific driver X.Org uses for the video card. You can select the driver by the name used in the X.Org configuration file’s Driver entry, or you can select it from a list of video card manufacturers and models. Once you’ve selected the video card and monitor settings, you can test your selection by clicking the Test button. X.Org will temporarily switch the desktop into the mode defined by the settings. In the new mode, a dialog box appears, asking whether you want to keep the new settings or revert to the original settings. Don’t worry if your desktop is inoperable with the new settings; if you don’t respond to the dialog box within 20 seconds, X.Org automatically reverts to the original settings.
Try to configure your desktop video settings using the Screen Resolution (Display) utility if at all possible. Using the Screens and Graphics utility may (and usually does) break the default xorg.conf configuration file generated by Ubuntu. However, if X.Org can’t automatically detect your video card or monitor, you have no choice but to resort to the Screens and Graphics utility.

Using 3-D Cards
In the past, one of the weaknesses of the Linux environment was its support for advanced video games. Many games popular in the Microsoft Windows world use advanced graphics that require specialized 3-D video cards, which Linux systems couldn’t support. In the past, specialized 3-D video cards were notorious for not working in the Linux environment because video card vendors never took the fledgling Linux market seriously. However, things are slowly starting to change. Two major 3-D video card vendors, ATI and NVIDIA, have released Linux drivers for their advanced products, allowing game developers to enter the Linux world. There’s a catch, though. Both ATI and NVIDIA released Linux binary drivers but not the source code for their 3-D video cards. A true open-source project must include source code for the binary drivers. This has caused a dilemma for Linux distributions. A Linux distribution that includes ATI or NVIDIA binary drivers violates the true spirit of open-source software. However, if a Linux distribution doesn’t provide these drivers, it risks falling behind in the Linux distribution wars and losing market share.

Ubuntu 3-D Support
Ubuntu has decided to solve this problem by splitting the difference. Ubuntu can detect ATI and NVIDIA video cards during the installation process and can install the proprietary binary drivers to support them. Ubuntu calls these restricted hardware drivers. Although Ubuntu supplies restricted hardware drivers, it doesn’t support them in any way. When you first log into the desktop after installation, Ubuntu displays a warning dialog telling you that restricted drivers have been installed. After the installation, an icon appears on the top panel, indicating that a restricted hardware driver has been installed and offering the option of removing the restricted drivers and replacing them with lesser-quality open-source drivers.

As with all things in the open-source programming world, there are current efforts to create open-source versions of many restricted hardware drivers. The Nouveau project is attempting to create a high-quality, open-source driver for operating NVIDIA cards in 3-D mode. At the time of this writing they’ve completed drivers for operating NVIDIA video cards in 2-D mode but haven’t finished the 3-D features. As Ubuntu progresses through new versions, it’s possible that a video card that once required a restricted driver will have an open-source driver in a newer distribution.
Viewing Restricted Hardware Drivers
You can view which restricted hardware drivers Ubuntu installed by using the Restricted Hardware Driver Manager. Start it by selecting System ➪ Administration ➪ Hardware Drivers from the Panel menu. If restricted drivers for any hardware device have been loaded, they appear in this list. You can disable a restricted driver by removing the check mark from the box in the Enable column. You can also view the state of the installed driver.



FONTS

Most computer users don't even think about fonts. They just expect them to work and assume that text will look the same whether it's viewed onscreen, printed, or sent to another user in a document. However, font management is actually a surprisingly complex task due to the many subtle variations in the ways fonts can be created and used.

Fonts can be defined in a number of different ways and have a variety of file formats. Each operating system has its own method of managing and displaying them. Some fonts are designed as bitmaps to be displayed onscreen, while others are in vector format so they can scale up or down and be printed at high resolution. Some come as bundles that include both bitmap and vector formats in the same package, with one used for onscreen display and the other used in printing or to generate files in output formats such as PDF. And some come as families, with several variations such as bold and italic bundled together with the base font, providing much better results than working from a single base font and then applying such variations algorithmically.

Font Management with Defoma
Ubuntu uses Defoma, the "Debian Font Manager," to centralize and simplify font management across all applications. Applications can vary dramatically in how they manage fonts, so when a new font is installed on your computer, it's not always obvious how to tell each application that the font exists and where to find it. Defoma gets around this problem by allowing applications to register themselves by providing a Defoma configuration script. Then, when a new font is installed, Defoma works through all the configuration scripts and performs whatever action is necessary to enable the font for each application. The first thing you should do, then, is make sure that your system is configured to use Defoma to manage fonts. Run:

$ sudo dpkg-reconfigure defoma 

If Defoma is not currently set to manage fonts, you will be asked if you want to use it; answer Yes.

If your system has ended up in an unclean state with some manually installed fonts or applications that can't see some fonts, you can force Defoma to totally rebuild its configuration. This process rescans all your installed fonts and makes sure all registered applications have been updated to use them:

$ sudo defoma-reconfigure 


Onscreen Font-Rendering Preferences
Various displays have different characteristics, and what looks good on a CRT doesn't necessarily look good on an LCD. Ubuntu provides a number of font options through System>Preferences>Font. You can change the default system fonts to suit your preferences, but if you have an LCD, the item to pay attention to is the subpixel smoothing option under Font Rendering. Each pixel in an LCD consists of three subpixels, one each for red, green, and blue. Subpixel smoothing takes the physical layout of the subpixels into account to display fonts as smoothly as possible. Advanced options are accessible through the Details button near the bottom right. From here, you can specify screen resolution, smoothing, hinting, and subpixel order.

Screen resolution
When the font renderer displays text onscreen, it needs to convert between various units to determine how large the text needs to be. Often font sizes are specified as points, which are units of measure that have been used (rather inconsistently!) for hundreds of years by printers. Nowadays, most people agree on one point being equal to 1/72nd of an inch, but if you tell your computer to display, for example, 18-point text, it needs to know the resolution of your display so it can figure out how many pixels are equivalent to 18/72nds (i.e., 1/4) of an inch on your particular screen. Screen resolution is usually expressed as dpi (dots per inch). To figure out the horizontal and vertical resolution of your screen, measure its width and height and then divide those values into the pixel dimensions set in System>Preferences>Screen Resolution (Display). For example, a typical so-called 17-inch LCD will have physical dimensions of about 13.3 inches by 10.75 inches and run at a native resolution of 1280x1024 pixels. That gives a horizontal resolution of 1280 ÷ 13.3 = 96.2 dpi, and a vertical resolution of 1024 ÷ 10.75 = 95.3 dpi. Close enough to call it 96 dpi for both.

By determining the actual physical resolution of your display and setting the correct Resolution value in the Font preferences, you can ensure that when your computer displays a font onscreen at a specific size, it will be scaled to appear at that actual size.

Smoothing
The Smoothing setting actually controls the level of antialiasing to apply when fonts are rendered. Antialiasing can have a dramatic impact on the clarity of fonts, particularly when displayed on an LCD. It smooths out jaggy corners and edges on fine lines by visually filling in gaps using surrounding pixels set to intermediate shades of grey. If you have an LCD, for the best-looking fonts, you should definitely select Subpixel as the Smoothing setting.

Hinting
Because computer screens operate at a much lower resolution than what we are used to seeing with printed material, fonts that are scaled down to a small size can sometimes suffer from effects whereby the shape and position of individual letters don't interact well with the pixel structure of the display itself, producing visible artifacts. For example, two letters next to each other that both have thin vertical lines may happen to fall slightly differently onto the pixel grid of the display, with the result that one line appears fatter than the other. A similar effect can occur with rounded letters, where fine curves may disappear or be inconsistent. Often the relative placement of letters will alter the visual effect of other letters around them. Hinting is the process of making tiny adjustments in the outline-filling process while rendering fonts to compensate for effects that might cause individual characters to appear differently from the way they were designed.

Doing accurate hinting requires more processor power whenever your computer needs to render large quantities of text, but the end result is text that appears smoother, more consistent and easier to read. You can choose from four hinting levels in Font Rendering Details:
  • None,
  • Slight,
  • Medium, and
  • Full.
The difference might seem subtle if you're not used to closely examining text and you don't know what to look for, but if you have a relatively modern machine, it's worth turning on hinting. LCDs in particular can benefit greatly from it, giving you much more readable text and less eyestrain.

Subpixel order
In the main Font Preferences dialog, there was an option to turn on subpixel smoothing, but for it to be really effective, you also need to make sure your computer knows the physical structure of the individual subpixels. In reality, subpixels are not dots: they're typically very short lines placed side by side. The vast majority of LCDs use an RGB order, but some reverse that and place the subpixels in BGR order. Then there are variations on those two options, with some manufacturers stacking subpixels vertically instead of placing them side by side. Selecting the option that matches your particular monitor structure will let your computer do the best job possible of smoothing fonts onscreen.

Install Microsoft Core Fonts
Microsoft Windows comes bundled with a number of core TrueType fonts. Because Windows is so widely used, many documents and web sites are designed around the core Microsoft fonts, and if you don't have them installed, your computer may not be able to display some documents as the author intended. Licence restrictions prevent the Microsoft fonts from being distributed directly as part of Ubuntu, but Microsoft does make them available for free download directly from its web site, and there is even an Ubuntu package that takes care of downloading and installing them for you:

$ sudo apt-get install msttcorefonts 

The msttcorefonts package is part of the multiverse repository, so it's not available on a standard Ubuntu installation and you may need to "Modify the List of Package Repositories" before you can install it. The package doesn't include the fonts themselves but instead connects to the Microsoft web site and downloads and installs them in the correct location on your computer. The fonts will then be available to applications the next time they start up.

Install Macintosh and Windows TrueType Fonts
Installing TrueType fonts is very easy on Ubuntu. On your desktop or in a file-browser window, just type Ctrl-L to access the Open Location window; then type fonts:/// and click Open. You will then see a list of all the fonts you currently have access to on your system. Drag your new TrueType font from your desktop or file manager into the font-list window, and it will be automatically installed and made available to applications through Defoma the next time they start up. It's actually not quite that simple if the fonts come from a Macintosh system, because Mac OS embeds extra font information using a special format that Linux can't read directly. Before you drag Mac OS fonts into your fonts:/// folder, you need to convert them with a utility called fondu, which you can install with the following command:
$ sudo apt-get install fondu 

Then copy your Mac OS font directory to your Linux machine and run:

$ fondu * 

inside it to generate converted TTF files.

The fonts:/// location isn't a real location in the filesystem. It's a virtual view that lets you manage the fonts that have been installed without having to worry about where they are actually located on disk. The fonts shown by default are the system-wide fonts that have been installed on your machine for all users to access, but when you drag a new font into the window, it actually stores it inside a hidden folder called .fonts inside your home directory.

Reference


  • Xorg Curators of the X Window System.
  • Xorg downloads Want to download the source to the latest revision of X? Start at this list of mirror sites.
  • XFree86 Project Home of The XFree86 Project, Inc., which has provided a graphical interface for Linux for nearly 10 years.
  • KDE The place to get started when learning about KDE and the latest developments.
  • GNOME The launch point for more information about GNOME, links to new clients, and GNOME development projects.


24 August 2009

Video and Audio Compression

from
A Practical Guide to Video and Audio Compression
Cliff Wootton 2005, Elsevier Inc.
ISBN: 0-240-80630-1


Intro


Video compression is all about trade-offs. Ask yourself what constitutes the best video experience for your customers. That is what determines where you are going to compromise. Which of these are the dominant factors for you?
  • Image quality
  • Sound quality
  • Frame rate
  • Saving disk space
  • Moving content around our network more quickly
  • Saving bandwidth
  • Reducing the playback overhead for older processors
  • Portability across platforms
  • Portability across players
  • Open standards
  • Licensing costs for the tools
  • Licensing costs for use of content
  • Revenue streams from customers to you
  • Access control and rights management
  • Reduced labor costs in production
You will need to weigh these factors against each other. Some of them are mutually exclusive.
The actual compression process itself is almost trivial in comparison to the contextual setting (the context in which the video is arriving as well as the context where it is going to be deployed once it has been processed) and the preprocessing activity. It is not necessary to use mathematical theory to understand compression.



What Is a Video Compressor?
All video compressors share common characteristics. The following terms describe the step-by-step process of compressing video:
  • Frame difference
  • Motion estimation
  • Discrete cosine transformation
  • Entropy coding
Video compression is only a small part of the end-to-end process. That process starts with deciding what to shoot, continues through the editing and composition of the footage, and usually ends with delivery on some kind of removable media or broadcast system. In a domestic setting, the end-to-end process might be the capture of analogue video directly off the air followed by digitization and efficient storage inside a home video server. This is what a TiVo Personal Video Recorder (PVR) does, and compression is an essential part of how that product works.
  • There is usually a lot of setting up involved before you ever compress anything. Preparing the content first so the compressor produces the best-quality output is very important. A rule of thumb is that about 90% of the work happens before the compression actually begins: roughly 90% of your effort goes into understanding and preparing the content, and only about 10% of your time is actually spent compressing video in the most effective way possible.
The word codec is derived from coder–decoder and is used to refer to both ends of the process—squeezing video down and expanding it to a viewable format again on playback.
Compatible coders and decoders must be used, so they tend to be paired up when they are delivered in a system like QuickTime or Windows Media. Sometimes the coder is provided for no charge and is included with the decoder. Other times you will have to buy the coder separately. By the way, the terms coder and encoder in general refer to the same thing.

Hot-pluggable connections are those that are safe to connect while your equipment is turned on. This is, in general, true of a signal connection but not a power connection. Some hardware, such as SCSI drives, must never be connected or disconnected while powered on. On the other hand, Firewire interfaces for disk drives are designed to be hot pluggable.
It is important to know whether we are working with high-definition or standard-definition content. Moving images shot on film are quite different from TV pictures due to the way that TV transmission interlaces alternate lines of a picture.
Interlacing separates the odd and even lines and transmits them separately. It allows the overall frame rate to be half what it would need to be if the whole display were delivered progressively. Thus, it reduces the bandwidth required to 50% and is therefore a form of compression.
Interlacing is actually a pretty harsh kind of compression given the artifacts that it introduces and the amount of processing complexity involved when trying to eliminate the unwanted effects. Harsh compression is a common result of squashing the video as much as possible, which often leads to some compromises on the viewing quality. The artifacts you can see are the visible signs of that compression.
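As a rough illustration of why interlacing halves the data per field period, here is a toy Python sketch that splits a progressive frame (modelled as a list of scan lines) into its odd and even fields. Real hardware of course does this in the signal domain, not on Python lists:

```python
def split_fields(frame):
    """Split a progressive frame (a list of scan lines) into two
    interlaced fields: the odd field (lines 1, 3, 5, ...) and the even
    field (lines 2, 4, 6, ...), numbering lines from 1 as TV engineers do.
    Each field carries only half the lines of the full frame."""
    odd = frame[0::2]   # lines 1, 3, 5, ...
    even = frame[1::2]  # lines 2, 4, 6, ...
    return odd, even

frame = ["line%d" % n for n in range(1, 7)]
odd, even = split_fields(frame)
print(odd)   # ['line1', 'line3', 'line5']
print(even)  # ['line2', 'line4', 'line6']
```

Transmitting one field per field period means each transmitted unit is half the size of a full frame, which is where the 50% bandwidth saving comes from.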
Because the sampling and compression of audio and video are essentially the same, artifacts that affect one will affect the other. They just present themselves differently to your ears and eyes.

Streamed content is delivered to you as a continuous series of pictures, and your system has to keep up. There is little opportunity to pause or buffer things to be dealt with later. Your system has to process the video as it arrives. It is often a critical part of a much larger streaming service that is delivering the encoded video to many thousands or even millions of subscribers. It has to work reliably all the time, every time. That ability will be compromised if you make suboptimum choices early on. Changing your mind about foundational systems you have already deployed can be difficult or impossible.

How do we store video in files? Some applications require particular kinds of containers and will not work if you present your video in the wrong kind of file. It is a bit like taking a flight with a commercial airline. Your suitcase may be the wrong size or shape or may weigh too much. You have to do something about it before you will be allowed to take it on the plane. It is the same with video. You may need to run some conversions on the video files before presenting the contents for compression.

In the context of video encoding, we have to make sure the right licenses are in place. We need
rights control because the content we are encoding may not always be our own. Playback
clients make decisions of their own based on the metadata in the content, or they can interact
with the server to determine when, where, and how the content may be played. Your playback client is the hardware apparatus, software application, movie player, or web page plug-in that you use to view the content.

Where do you want to put your finished compressed video output? Are you doing this so you can archive some content? Is there a public-facing service that you are going to provide? This is often called deployment: the process of delivering your content to the right place.

How is your compressed video streamed to your customers? Streaming comes in a variety of formats. Sometimes we are just delivering one program, but even then we are delivering several streams of content at the same time. Audio and video are processed and delivered to the viewer independently, even though they appear to be delivered together. That is actually an illusion because they are carefully synchronized. It is quite obvious when they are not in sync, however, and it could be your responsibility to fix the problem.

About the client players for which you are creating your content: Using open standards helps to reach a wider audience. Beware of situations where a specific player is mandated. This is either because you have chosen a proprietary codec or because the open standard is not supported correctly. That may be accidental or purposeful. Companies that manufacture encoders and players will sometimes advertise that they support an open standard but then deliver it inside a proprietary container.

You are likely to hit a few bumps along the way as you try your hand at video compression. These will manifest themselves in a particularly difficult-to-encode video sequence. You will no doubt have a limited bit rate budget and the complexity of the content may require more data than you can afford to send. So you will have to trade off some complexity to reduce the bandwidth requirements. Degrading the picture quality is one option, or you can reduce the frame rate. The opportunities to improve your encoded video quality begin when you plan what to shoot.



Conventions

  • Film size is always specified in metric values measured in millimeters (mm).
  • Sometimes scanning is described as dots per inch or lines per inch.
  • TV screen sizes are always described in inches measured diagonally. Most of the time, this won’t matter to us, since we are describing digital imagery measured in pixels. The imaging area of film is measured in mm, and therefore a film-scanning resolution in dots per mm seems a sensible compromise.
TV pictures generally scan with interlaced lines, and computers use a progressive scanning layout. The difference between them is the delivery order of the lines in the picture. Frame rates are also different.
The convention for describing a scanning format is to indicate the number of physical lines, the scanning model, and the field rate. For interlaced displays, the field rate is twice the frame rate, while for progressive displays, they are the same.
For example, 525i60 and 625i50 describe the American and European display formats, respectively.
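A small, hypothetical Python helper (the name `parse_scan_format` is my own invention, not a standard API) shows how the convention decomposes:

```python
import re

def parse_scan_format(fmt):
    """Parse a scanning descriptor such as '525i60' or '720p50' into
    (lines, mode, field_rate, frame_rate). For interlaced formats the
    frame rate is half the field rate; for progressive formats they
    are the same."""
    m = re.fullmatch(r"(\d+)([ip])(\d+)", fmt)
    if not m:
        raise ValueError("not a scanning descriptor: %r" % fmt)
    lines, mode, rate = int(m.group(1)), m.group(2), int(m.group(3))
    frame_rate = rate / 2 if mode == "i" else rate
    return lines, mode, rate, frame_rate

print(parse_scan_format("525i60"))  # (525, 'i', 60, 30.0)
print(parse_scan_format("625i50"))  # (625, 'i', 50, 25.0)
```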

In the abbreviations we use, note that uppercase B refers to bytes, and lowercase b is
bits. So GB is gigabytes. When we multiply bits or bytes by each increment, the value 1000 is actually replaced by the nearest equivalent base-2 number. So we multiply memory size by 1024 instead of 1000 to get kilobytes.
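To make the base-2 versus base-10 distinction concrete, a short Python sketch (the helper names are my own, for illustration only):

```python
BITS_PER_BYTE = 8

def kilobytes(n_bytes, binary=True):
    """Express a byte count in kilobytes. Memory sizes conventionally
    use the base-2 multiplier 1024; decimal contexts such as network
    rates use 1000."""
    return n_bytes / (1024 if binary else 1000)

print(kilobytes(65536))         # 64.0 (base-2 kilobytes)
print(kilobytes(65536, False))  # 65.536 (decimal kilobytes)
# A 2 Mb/s (megabit) stream moves only 250 decimal kB per second:
print(2_000_000 / BITS_PER_BYTE / 1000)  # 250.0
```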

The MPEG-4 Part 10 codec, otherwise known as H.264, is part of a family of video encoders. Engineering people use the term H.264, while commercial or marketing people prefer AVC.

Further confusion arises during discussion of the Windows Media codecs, since they have been lodged with SMPTE for ratification as an open standard. All of the naming conventions in Table 1-3 have been used in documents about video compression and codecs. Unless it is necessary to refer to the Windows Media codec by a different alias, the term VC-1 will be used here as far as possible.





Why Do We Need Video Compression?


There are quite a few products and services available today that just wouldn’t be possible without compression. Many more are being developed.

Delivering digital video and audio through the available networks is simply impossible without compressing the content first.
To give you some history, there has been a desire to deliver TV services through telephone networks for many years. Trials were carried out during the 1980s. Ultimately, they were all unsuccessful because they couldn’t get the information down the wire quickly enough. Now we are on the threshold of being able to compress TV services enough that they can fit into the bandwidth being made available to broadband users. The crossing point of those two technologies is a very important threshold. Beyond it, even more sophisticated services become available as the broadcast on-air TV service comes to occupy a smaller percentage of the available bandwidth. So, as bandwidth increases and compressors get better, all kinds of new ways to enjoy TV and Internet services come online. For example, a weather forecasting service
could be packaged as an interactive presentation and downloaded in the background. If this is cached on a local hard disk, it will always be available on demand, at an instant’s notice. An updated copy can be delivered in the background as often as needed. Similar services can be developed around airline flight details, traffic conditions, and sports results.


Compression Is About Trade-Offs

Compressing video is all about making the best compromises possible without giving up too much quality. To that end, anything that reduces the amount of video to be encoded will help reduce the overall size of the finished output file or stream.

Compression is not only about keeping overall file size small. It also deals with optimizing data throughput—the amount of data that will steadily move through your playback pipeline and get onto the screen.
  • If you don’t compress the video properly, it will not fit the pipe and therefore cannot be streamed in real time.
  • Reducing the number of frames to be delivered helps reduce the capacity required, but the motion becomes jerky and unrealistic. Keeping the frame count up may mean you have to compromise on the amount of data per frame. That leads to loss of quality and a blocky appearance. Judging the right setting is difficult, because certain content compresses more easily, while other material creates a spike in the bit rate required. That spike can be allowed to momentarily absorb a higher bit rate, in which case the quality will stay the same. Alternatively, you can cap the bit rate that is available. If you cap the bit rate, the quality will momentarily decline and then recover after the spike has passed. A good example of this is a dissolve between two scenes when compressed using MPEG-2 for broadcast TV services operating within a fixed and capped bit rate.


First We Have to Digitize

Although some compression can take place while video is still in an analog form, we only get the large compression ratios by first converting the data to a digital representation and then reducing the redundancy. Converting from analog to digital form is popularly called digitizing.
We now have techniques for digitally representing virtually everything that we might consume.
The whole world is being digitized, but we aren’t yet living in the world of The Matrix.
Digitizing processes are normally only concerned with creating a representation of a view. Video structure allows us to isolate a view at a particular time, but unless we apply a lot more processing, we cannot easily isolate objects within a scene or reconstruct the 3D spatial model of a scene.

Software exists that can do that kind of analysis, but it is very difficult. It does lead to very efficient compression, though. So standards like MPEG-4 allow for 3D models of real-world objects to be used. That content would have the necessary structure to exploit this kind of compression because it was preserved during the creation process. Movie special effects use 3D-model and 2D-view digitizing to combine artificially created scene components and characters with real-world pictures. Even so, many measurements must still be taken when the plates (footage) are shot.


Spatial Compression
Spatial compression squashes a single image. The encoder considers only data that is self-contained within a single picture and bears no relationship to other frames in a sequence. We use this process all the time when we take pictures with digital still cameras and upload them as JPEG files. GIF and TIFF images are also examples of spatial compression. Simple video codecs just create a sequence of still frames that are coded in this way. Motion JPEG is an example in which every frame is discrete from the others.

The process starts with uncompressed data that describes a color value at a Cartesian (or X–Y) point in the image. Figure 2-1 shows a basic image pixel map. The next stage is to apply some run-length encoding, which is a way of describing a range of pixels whose value is the same.

Descriptions of the image, such as “pixels 0,0 to 100,100 are all black,” are recorded in the file. A much more compact description is shown in Figure 2-2. This coding mechanism assumes that the coding operates on scan lines. Otherwise it
would just describe a diagonal line.
The run-length encoding technique eliminates much redundant data without losing quality. A lossless compressor such as this reduces the data to about 50% of the original size,
depending on the image complexity. This is particularly good for cell-animated footage.
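A minimal Python sketch makes the run-length idea concrete; this is a simplification of what TIFF-style coders actually do, but it shows why the scheme is lossless:

```python
def rle_encode(pixels):
    """Losslessly encode a scan line as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([p, 1])   # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original pixels."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

line = ["black"] * 5 + ["white"] * 3
encoded = rle_encode(line)
print(encoded)                      # [('black', 5), ('white', 3)]
assert rle_decode(encoded) == line  # lossless round trip
```

Long runs of identical pixels, as in cell-animated footage, collapse into a handful of pairs, which is exactly where the space saving comes from.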

The TIFF image format uses a related lossless technique called LZW compression, named after its inventors, Lempel, Ziv, and Welch. Use of LZW coding is subject to some royalty fees if you want to implement it, because the concepts embodied in it are patented. This should be included in the purchase price of any tools you buy.

The next level of spatial compression in terms of complexity is the JPEG technique, which breaks the image into macroblocks and applies the discrete cosine transform (DCT). This kind of compression starts to become lossy. Minimal losses are undetectable by the human eye, but as the compression ratio increases, the image visibly degrades. Compression using the JPEG technique reduces the data to about 10% of the original size.
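For the curious, here is a naive pure-Python 8x8 DCT (the textbook O(N^4) formula, not the fast factorizations real JPEG coders use) showing how a flat block collapses into a single DC coefficient; the loss in JPEG comes from quantizing or discarding the small high-frequency coefficients afterwards:

```python
import math

def dct2d(block):
    """Type-II 2D DCT of an 8x8 block, as applied to JPEG macroblocks.
    For smooth image blocks, most of the energy lands in the top-left
    (low-frequency) coefficients."""
    N = 8
    def alpha(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A flat mid-grey block: all the energy lands in the DC coefficient.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct2d(flat)
print(round(coeffs[0][0]))  # 1024 (DC term); every other coefficient is ~0
```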


Temporal Compression
Video presentation is concerned with time and the presentation of the images at regular
intervals. The time axis gives us extra opportunities to save space by looking for redundancy
across multiple images.

This kind of compression is always lossy. It is founded on the concept of looking for differences between successive images and describing those differences, without having to repeat the description of any part of the image that is unchanged.

Spatial compression is used to define a starting point or key frame. After that, only the differences are described. Reasonably good quality is achieved at a data rate of one tenth of the original data size of the original uncompressed format. Research efforts are underway to investigate ever more complex ways to encode the video without requiring the decoder to work much harder. The innovation in encoders leads to significantly improved compression factors during the player deployment lifetime without needing to replace the player.
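A toy Python sketch of the key-frame-plus-differences idea (real codecs diff motion-compensated blocks, not individual pixels, but the principle is the same):

```python
def frame_delta(previous, current):
    """Describe the current frame as a sparse list of (index, new_value)
    changes against the previous frame; unchanged pixels cost nothing."""
    return [(i, c) for i, (p, c) in enumerate(zip(previous, current)) if p != c]

def apply_delta(previous, delta):
    """Reconstruct the next frame from the previous frame plus its delta,
    as a decoder would."""
    frame = list(previous)
    for i, value in delta:
        frame[i] = value
    return frame

key_frame = [10, 10, 10, 10, 10]   # spatially compressed key frame
next_frame = [10, 10, 99, 10, 10]  # one pixel changed
delta = frame_delta(key_frame, next_frame)
print(delta)  # [(2, 99)]
assert apply_delta(key_frame, delta) == next_frame
```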

A shortcut to temporal compression is to drop some frames; however, this is not recommended. In any case, it is not a suitable option for TV transmission, which must maintain the frame rate.


Why Do I Need Video Compression?
Service providers and content owners are constantly looking for new avenues of profit from the material they own the rights to. For this reason, technology that provides a means to facilitate the delivery of that content to new markets is very attractive to them. Content owners require an efficient way to deliver content to their centralized repositories. Cheap and effective ways to provide that content to end users are needed, too. Video compression can be used at the point where video is imported into your workflow at the beginning of the content chain as well as at the delivery end. If you are using video compression at the input, you must be very careful not to introduce undesirable artifacts. For archival and transcoding reasons, you should store only uncompressed source video if you can afford sufficient storage capacity.



Some Real-World Scenarios

Let’s examine some of the possible scenarios where video compression can provide assistance.
In some of these examples, video compression enables an entire commercial activity that simply would not be possible otherwise. We’ll take a look at some areas of business to see how compression helps them.


Mobile Journalism

News-gathering operations used to involve a team of people going out into the field to operate bulky and very expensive equipment. As technology has progressed, cameras have gotten smaller and easier to use. A film crew used to typically include a sound engineer, camera person, and producer, as well as the journalist being filmed. These days, the camera is very likely carried by the journalist and is set up to operate automatically.

Broadcast news coverage is being originated on videophones, mini-cams, and video-enabled mobile-phone devices. The quality of these cameras is rapidly improving. To maintain a comfortable size and weight for portable use, the storage capacity in terms of hardware has very strict limits. Video compression increases the capacity and thus the recording time available by condensing the data before recording takes place.
Current practice is to shoot on a small DV camera, edit the footage on a laptop, and then send it back to base via a videophone or satellite transceiver. The quality will clearly not be the same as that from a studio camera, but it is surprisingly good even though a high compression ratio is used.
Trials are underway to determine whether useful results can be obtained with a PDA device fitted with a video camera and integral mobile phone to send the material back to a field headquarters. The problem is mainly one of picture size and available bandwidth for delivery.


Online Interactive Multi-Player Games

Multi-player online gaming systems have become very popular in recent years. The realism of the visuals increases all the time. So, too, does the requirement to hurl an ever growing quantity of bits down a very narrow pipe. The difficulty increases as the games become more popular, with more streams having to be delivered simultaneously. Online games differ significantly from normal video, because for a game to be compelling, some aspects of what you see must be computed as a consequence of your actions. Otherwise, the experience is not interactive enough.

There are some useful techniques to apply that will reduce the bit rate required. For example, portions of the image can be static. Static images don’t require any particular bit rate from one frame to the next since they are unchanged. Only pixels containing a moving object need to be delivered. More sophisticated games are evolving, and interactivity becomes more interesting
if you cache the different visual components of the scene in the local player hardware and then composite them as needed. This allows some virtual-reality (VR) techniques to be employed to animate the backdrop from a large static image.

Nevertheless, compression is still required in order to shrink these component assets down to a reasonable size, even if they are served from a local cache or CD-ROM. New standards-based codecs will facilitate much more sophisticated game play. Codecs such as H.264 are very efficient. Fully exploiting the capabilities of the MPEG-4 standard will allow you to create non-rectangular, alpha-blended areas of moving video. You could map that video onto a 3D mesh that represents some terrain or even a face. The MPEG-4 standard also provides scene construction mechanisms so that video assets can be projected into a 3D environment at the player. This allows the user to control the point of view. It also reduces the bit rate required for delivery, because only the flat, 2D versions of the content need to be delivered as component objects. As the scene becomes more realistic, video compression helps keep games such as first-person shooters small enough to deploy online or on some kind of sell-through, removable-disk format.


Online Betting

Betting systems are sometimes grouped together with online gaming, and that may be appropriate in some cases. But online gaming is more about the interaction between groups of users and may involve the transfer of large amounts of data on a peer-to-peer basis.

Betting systems can be an extension of the real-world betting shop where you place your wager and watch the outcome of the horse race or sports event on a wall of monitor screens. The transfer of that monitor wall to your domestic PC or TV screen is facilitated by efficient and cheap video compression. Real-time compression comes to the fore here because you cannot introduce more than fractions of a second of delay—the end users have wagered their own money and they expect the results to arrive in a timely manner.

Another scenario could involve a virtual poker game. These are often based around VR simulations of a scene, but with suitable compression a live game could be streamed to anyone who wants to dial in and watch. Virtualizing a pack of cards is possible by simulating the cards on the screen, and a video-conferencing system could be used to enable observation of facial expressions of the other players in the game.


Sports and News Coverage

Of all the different genres of content that broadcasters provide to end users, news and sports have some particularly important criteria that directly affect the way that video is compressed for presentation.

News and sports are both very information-rich genres. Archiving systems tend to be large in both cases because there is a lot of material available. The metadata associated with the content assists the searching process and also facilitates the digital rights management (DRM) process. The content is easily accessible and widely available, but the playback can be controlled. Video may need to be encrypted as well as encoded. Other technologies such as watermarking are used, and these present additional technical problems. In general, the rights protection techniques that are available impose further loads on an already hardworking compression system.

The nature of news content is that the material must be encoded quickly and presented as soon after the event as possible. The same is true of sports coverage, and services that present the highlights of a sporting event need to be able to select and encode fragments of content easily, quickly, and reliably. These demands lead to the implementation of very large infrastructure projects such as the BBC Colledia-based Jupiter system deployed in its news division. This facilitates the sharing of media assets as soon as they start to arrive. Editing by multiple teams at the same time is possible, and the finished packages are then routed to transmission servers in a form that is ready to deploy to the national TV broadcast service as well as to the Internet.


Advertising

Advertising on the Internet is beginning to use video to present more compelling content. The newer codecs such as H.264 allow the macroblocks to be presented in quite sophisticated
geometrical arrangements. It is now feasible to fit video into the traditional banner-advertising rectangles that have a very different aspect ratio from normal video. Creating the content may need to be done using video-editing tools that allow non-standard raster sizes to be used.
More information about these standard sizes is available at the Interactive Advertising Bureau (IAB) Web site.
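The padding involved is easy to see with a little arithmetic. H.264 codes each frame as a grid of 16 × 16-pixel macroblocks, so a banner raster that is not a multiple of 16 in either dimension is padded internally and cropped on output. The sketch below works this out for a few common banner sizes; the size list here is illustrative, not taken from the IAB specification:

```python
# H.264 codes frames as a grid of 16x16-pixel macroblocks; frames whose
# dimensions are not multiples of 16 are padded internally and cropped on
# output. The banner sizes below are illustrative examples.

MB = 16  # macroblock edge in pixels

def padded_size(width, height, mb=MB):
    """Round dimensions up to the next macroblock multiple."""
    pad = lambda n: ((n + mb - 1) // mb) * mb
    return pad(width), pad(height)

banners = {
    "leaderboard (728x90)": (728, 90),
    "medium rectangle (300x250)": (300, 250),
    "skyscraper (160x600)": (160, 600),
}

for name, (w, h) in banners.items():
    pw, ph = padded_size(w, h)
    print(f"{name}: coded as {pw}x{ph}, cropped to {w}x{h}")
```

A 728 × 90 leaderboard, for instance, is coded as 736 × 96 and cropped back on display, which is why editing tools need to cope with these unusual rasters.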


Video Conferencing
Large corporations have used video conferencing for many years. As far back as the 1980s,
multinational corporations were prepared to permanently lease lines from the telecommunications companies in order to link headquarters offices in the United States with
European offices. This generally required a dedicated room to be set aside and was sufficiently
expensive that only one video-conferencing station would be built per site. Only one group of people could participate at a time, and the use of the technology was reserved for important meetings.

Video conferencing can now be deployed to a desktop or mobile phone. This is only possible because video compression reduces the data-transfer rate to a trickle compared with the systems in use just a few years ago. Video conferencing applications currently lack the levels of interoperability between competing systems that telephone users enjoy for speech. That will come in time. For now, the systems being introduced are breaking new ground in making this available to the general public and establishing the fundamental principles of how the infrastructure should support it.
An example of an advanced video-conferencing user interface that supports multiple simultaneous users is available in the Mac OS X version 10.4 operating system and is called iChat AV.


Remote Medicine
The use of remote apparatus and VR techniques for medicine is starting to facilitate so-called
"telemedicine," where an expert in some aspect of a medical condition participates in a surgical operation being performed on the other side of the world. Clearly there are issues here regarding the need for force feedback when medical instruments are being operated remotely. Otherwise, how can the operating surgeon "feel" what the instrument is doing on the remote servo-operated system? Game players have used force-feedback systems for some time. The challenge is to adapt this for other situations and maintain a totally synchronized remote experience. Video compression is a critical technology that allows multiple simultaneous camera views to be delivered over long distances. This will also work well for MRI, ultrasound, and X-ray-imaging systems that could all have their output fed in real time to a remote surgeon. The requirements here are for very high resolution. X-ray images need to be digitized in grayscale at increased bit depths and at a much higher resolution than TV. This obviously increases the amount of data to be transferred.
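A back-of-envelope calculation shows why the data volumes grow so quickly. The X-ray plate dimensions and bit depth below are illustrative assumptions chosen for the comparison, not figures from any medical-imaging standard:

```python
# Rough comparison of uncompressed frame sizes: a standard-definition TV
# frame versus a high-resolution grayscale X-ray. The X-ray dimensions and
# bit depth are illustrative assumptions.

def frame_bytes(width, height, bits_per_pixel):
    """Uncompressed image size in bytes."""
    return width * height * bits_per_pixel // 8

sd_tv = frame_bytes(720, 576, 16)    # SD frame, 4:2:2 at 8 bits/sample
xray  = frame_bytes(4096, 4096, 16)  # 16-bit grayscale X-ray plate

print(f"SD TV frame: {sd_tv / 1e6:.1f} MB")
print(f"X-ray frame: {xray / 1e6:.1f} MB")
print(f"Ratio: roughly {xray // sd_tv}x more data per image")
```

Under these assumptions, a single X-ray plate carries around 40 times the data of an uncompressed SD television frame, which is what makes compression unavoidable for remote delivery.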


Remote Education
Young people often have an immediate grasp of technology and readily participate in interactive games and educational uses of video and computing systems. The education community has fully embraced computer simulation, games, and interactive software. Some of the most advanced CD-ROM products were designed for educational purposes. With equal enthusiasm, the education community has embraced the Internet, mainly by way of Web sites. Video compression provides opportunities to deploy an even richer kind of media for use in educational systems. This enhances the enjoyment of consumers when they participate. Indeed, it may be the only way to enfranchise some special-needs children who already have learning difficulties.


Online Support and Customer Services
Online help systems may be implemented with video-led tuition. When designing and
implementing such a system, it is important to avoid alienating the user. Presenting users with an experience that feels like talking to a machine would be counterproductive. Automated answering systems already bother some users due to the sterile nature of the interchange. An avatar-based help system might fare no better and present an unsatisfying experience unless it is backed up by well-designed artificial intelligence.


Entertainment

Online gaming and betting could be categorized as entertainment. Uses of video compression
with other forms of entertainment are also possible. DVD sales have taken off faster than anyone could ever have predicted. They are cheap to manufacture and provide added-value features that can enhance the viewer’s enjoyment. The MPEG-4 standard offers packaging for interactive material in a way that the current DVD specification cannot match. Hybrid DVD disks with MPEG-4 interactive content and players with MPEG-4 support could herald a renaissance in content authoring similar to what took place in the mid-1990s with CD-ROMs.
The more video can be compressed into smaller packages without losing quality, the better the experience for the viewer within the same delivery form factor (disk) or capacity (bit rate). The H.264 codec is being adopted widely as the natural format for delivering high-definition TV (HDTV) content on DVD and allows us to store even longer programs on the existing 5-GB and 9-GB disks.
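The trade-off between bit rate and playing time is simple arithmetic. Taking the disk capacities mentioned above and two illustrative bit rates (roughly 5 Mbps for an MPEG-2 standard-definition encode and 8 Mbps for an H.264 high-definition encode, both assumptions rather than fixed values):

```python
# Playing time on a disk at a given total bit rate. The bit rates used
# here are illustrative assumptions, not mandated values.

def hours_on_disk(disk_gb, mbps):
    """Hours of video that fit on a disk of disk_gb gigabytes at mbps."""
    bits = disk_gb * 1e9 * 8           # disk capacity in bits
    return bits / (mbps * 1e6) / 3600  # seconds at mbps, converted to hours

for disk in (5, 9):
    print(f"{disk}-GB disk: "
          f"{hours_on_disk(disk, 5):.1f} h at 5 Mbps, "
          f"{hours_on_disk(disk, 8):.1f} h at 8 Mbps")
```

Halving the bit rate for the same perceived quality doubles the playing time, which is exactly the leverage a better codec provides.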


Religion

All of the major religions have a presence on the Internet. There are Web sites that describe their philosophy, theology, and origins. Video compression provides a way to involve members of the community who may not be physically able to attend ceremonies. They may even be able to participate through a streamed-video broadcast. This may well be within the financial reach of medium to large churches, and as costs are reduced, even small communities may be able to deploy this kind of service. There are great social benefits to be gained from community-based use of video-compression systems. Such applications could be built around video-conferencing technologies quite inexpensively.


Commerce

Quite unexpectedly, the shopping channel has become one of the more popular television
formats. This seems to provide an oddly compelling kind of viewing. Production costs are very low, and yet people tune in regularly to watch. Broadcasting these channels at 4.5 megabits per second (Mbps) on a satellite link may ultimately prove to be too expensive. As broadband technology improves its reach to consumers, these channels could be delivered at much lower bit rates through a networked infrastructure.

A variation of this that can be combined with a video-conferencing system is the business-to-business application. Sales pitches; demos; and all manner of commercial meetings, seminars, and presentations could take place courtesy of fast and efficient video compression.


Security and Surveillance

Modern society requires that a great deal of our travel and day-to-day activity take place under surveillance. A commuter traveling from home to a railway station by car, then by train, and then on an inner-city rapid-transit system may well be captured by as many as 200 cameras between home and the office desk. That is a lot of video, which until recently has been recorded on VHS tapes, sometimes at very low resolution, at reduced frame rates, and presented four at once in quarter-frame panes in order to save tape.

Newer systems are being introduced that use video compression to preserve video quality, increase frame rates, and automate the storage of the video on centralized repositories. By using digital video, the searching and facial-recognition systems can be connected to the repository. Suspects can be followed from one camera to another by synchronizing the streams and playing them back together. This is a good thing if it helps to trace and then arrest a felon. Our legislators have to draw a very careful line between using this technology for the good of society as a whole and infringing on our rights to go about our daily lives without intervention by the state. You may disagree with or feel uncomfortable about this level of surveillance, but it will likely continue to take place.
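The storage requirements of such a repository are easy to estimate. All of the figures in this sketch are illustrative assumptions: a single camera encoded at around 0.5 Mbps, recording continuously, with footage retained for 31 days:

```python
# Rough sizing for a digital CCTV repository. All figures are illustrative
# assumptions: one camera at ~0.5 Mbps, continuous recording, 31-day
# retention.

def camera_storage_gb(mbps, days):
    """Gigabytes needed to retain one continuous camera feed."""
    seconds = days * 24 * 3600
    return mbps * 1e6 * seconds / 8 / 1e9  # bits -> bytes -> gigabytes

per_camera = camera_storage_gb(0.5, 31)
print(f"One camera, 31 days: {per_camera:.0f} GB")
print(f"200 cameras: {200 * per_camera / 1000:.1f} TB")
```

Even at a modest bit rate, a few hundred cameras run into tens of terabytes per month, which is why centralized, compressed repositories replaced banks of VHS machines.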


Compliance Recording

Broadcasters are required to record their output and store it for 90 days, so that if someone
wants to complain about something that was said or a rights issue needs to be resolved, the evidence is there to support or deny the claim. This is called compliance recording, and historically it was accomplished through a manually operated bank of VHS recorders running in LP mode and storing 8 hours of video per tape, requiring three cassettes per day per channel. The BBC outputs at least six full-frame TV services that need to be monitored in this way. The archive for 90 days of recording is some 1620 tapes. These all have to be labeled, cataloged, and stored for easy access in case of a retrieval request. The TX-2 compliance recorder was built on a Windows platform and was designed according to the requirements of the regulatory organizations so that UK broadcasters could store 90 days' worth of content in an automated system. The compliance recorder is based on a master node with attached slaves, which can handle up to 16 channels in a fully configured system. Access to the archived footage is achieved via a Web-based interface, and the video is then streamed back to the requesting client. This recorder could not have been built without video compression, and it is a good
example of the kind of product that can be built on top of a platform such as Windows Media running on a Windows operating system, or another manufacturer's technology. Because this is a software-based system, the compression ratio and hence the capacity and quality of the video storage can be configured. Less video at a higher quality can be stored, or maximal time at low quality. The choice is yours.
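The tape count quoted above follows directly from the numbers in the text: a VHS cassette in LP mode holds 8 hours, so a 24-hour channel consumes three cassettes per day:

```python
# The 1620-tape figure follows directly: LP-mode VHS holds 8 hours, so a
# 24-hour channel needs 3 cassettes per day.

channels = 6                      # full-frame BBC TV services monitored
days = 90                         # statutory retention period
tape_hours = 8                    # one VHS cassette in LP mode
tapes_per_day = 24 // tape_hours  # 3 cassettes per channel per day

archive = channels * days * tapes_per_day
print(f"Cassettes in the rolling archive: {archive}")
```

Replacing 1620 physical cassettes with a configurable software encoder is the essence of the TX-2 design.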


Conference Proceedings

Using large-screen displays at conferences is becoming very popular. These are driven by a video feed shot by professional camera operators, and the video is often captured and made available to delegates after the conference. The SIGGRAPH conference proceedings, for example, make significant use of compression to create the DVD proceedings disk, and the Apple developer conference proceedings have for some years been a showcase of Apple's prowess with video workflow and production processes as well as its engineering work on codecs.


Broadband Video on Demand

During 2004, the BBC tested a system called the Internet Media Player (BBC iMP). This system presents an electronic program guide (EPG) over a 14-day window. The user browses the EPG listings and is able to call up something that was missed during the previous week. Alternatively, a recording can be scheduled during the next few days. In order to adequately protect the content, the BBC iMP trials are run on a Windows-based platform that supports the Windows Media DRM functionality. If the iMP player were used on a laptop connected to a fixed broadband service, the downloaded material could be taken on the road and viewed remotely. This enables video to be as mobile as music carried around on Walkman and iPod devices. Future experiments in the area of broadband-delivered TV will explore some interesting peer-to-peer file techniques, which are designed to alleviate the bandwidth burden
on service providers. For this to work, we must have reliable and robust DRM solutions, or the super-distribution model will fail to get acceptance from the content providers.


Home Theatre Systems

Hollywood movies are designed to be viewed on a large screen in a darkened room with a surround-sound system. There is now a growing market for equipment to be deployed at home to give you the same experience. The media is still mostly available in standard definition but some high-definition content is being broadcast already. More high-definition services will be launched during the next few years. Plasma, LCD, or LED flat screens are available in sizes up to 60 inches diagonal. If you want to go larger than that, you will need to consider a projection system. At large screen sizes, it helps to increase the resolution of the image that is being
projected, and that may require some special hardware to scale it up and interpolate the additional pixels. At these increased screen sizes, any artifacts that result from the compression
will be very obvious. Higher bit rates will be necessary to allow a lower compression ratio. Some DVD products are shipped in special editions that give up all the special features in order to increase the bit rate. The gradual advancement of codec technology works in your favor. New designs yield better performance for the same bit rate as technology improves. The bottom line is that compressing video to use on a standard-definition TV set may not be good enough for home-cinema purists.


Digital Cinema

Interestingly, the high-definition TV standards that are emerging seem to be appropriate for use in digital-cinema (D-cinema) situations. The same content will play in the domestic
environment just as easily. As high-definition TV becomes more popular and more people install home theatre systems, commercial cinema complexes will need to develop their
business in new ways. They will have to do this in order to differentiate their product and give people a reason to visit the cinema instead of watching the movie at home.



Platforms
With the increasing trends toward technological convergence, devices that were inconceivable
as potential targets for video content are now becoming viable. Science fiction writers have been extolling the virtues of portable hand-held video devices for years, and now the technology is here to realize that capability. In fact, modern third-generation mobile phones are more functional and more compact than science fiction writers had envisaged being available hundreds of years into the future. Handheld video, and touch-screen, flat-screen, and large-screen video, are all available here and now. They are being rolled out in a front room near you right this minute. What we take for granted and routinely use every day is already way beyond the futuristic technologies of the Star Trek crew.


Portable Video Shoot and Edit

Portable cameras have been around for a long time. Amateur film formats were made available to the consumer as 8-mm home-movie products; they replaced earlier and more unwieldy film gauges. The 8-mm formats became increasingly popular in the 1950s and '60s. The major shortcoming of these was that they held only enough footage to shoot 4 minutes, and most models required that the film be turned over halfway through, so the maximum shot length was only 2 minutes. At the time, battery technology was less sophisticated than what we take for granted now, and many cameras were driven by clockwork mechanisms. These devices were displaced quite rapidly with the introduction of VHS home-video systems in the late 1970s. Several formats were introduced to try to encourage mass appeal. But editing the content was cumbersome and required several expensive four-head video recorders. Just after the start of the new millennium, digital cameras reached a price point that was affordable for the home-movie enthusiast.

Now that the cameras can be fitted with FireWire interfaces (also called i.LINK and IEEE 1394), their connection to a computer has revolutionized the video workflow. These cameras use the digital video (DV) format that is virtually identical to the DVCAM format used by professional videographers and TV companies. The DV format was originally conceived by Sony as digital 8-mm tape for use in Sony Handycam® recorders. The current state of the art is represented by a system such as an Apple Macintosh G4 12-inch laptop with a FireWire connection to a Sony DCR PC 105 camera. The camera and laptop fit in a small briefcase. This combination is amazingly capable for a very reasonable total purchase price of less than $3000. The Apple laptop comes with the iMovie video-editing software already installed, which is sufficient to edit a movie and then burn a DVD (with the iDVD application). You can walk out of the store with it and start working on your movie project right away. Of course, there are alternative software offerings, and other manufacturers' laptops support the same functionality. Sony VAIO computers are very video capable because they are designed to complement Sony's range of cameras, and the Adobe Premiere and Avid DV editing systems are comparable to the Apple Final Cut Pro software if you want to use Windows-based machines.
This is all done more effectively on desktop machines with greater processing power. The laptop solution is part of an end-to-end process of workflow that allows a lot of work to be done in the field before content is shipped back to base.


Video Playback on Handheld Devices
Handheld video playback is becoming quite commonplace. There are several classes of device available depending on what you need. Obviously, the more sophisticated they are, the more expensive the hardware. There is a natural convergence here, so that ultimately all of these capabilities may be found in a single generic device. These fall into a family of mobile video devices that include
  • Portable TV sets supporting terrestrial digital-TV reception
  • Portable DVD viewers
  • Diskless portable movie players
  • PDA viewers

Video Phones
The new generation of mobile-phone devices is converging with the role of the handheld
personal digital assistant (PDA). These mobile phones are widely available and have cameras
and video playback built in. They also have address books and other PDA-like applications,
although these may be less sophisticated than those found in a genuine PDA. Some services were being developed for so-called 2.5G mobile phones, but now that the genuine 3G phones are shipping, they will likely replace the 2.5G offerings.


H.264 on Mobile Devices
H.264 is designed to be useful for mobile devices and consumer playback of video. Rolling this standard out for some applications must take account of the installed base of players, and that will take some time. So it is likely that, initially, H.264 will be used primarily as a mobile format.


The Ultimate Handheld Device

Taking the capabilities of a portable TV, DVD player, PDA, and mobile phone and integrating
them into a single device gets very close to the ultimate handheld portable media device. Well it might, if the capabilities of a sub-notebook computer are included. A fair use policy is now required for consumer digital video that allows us to transfer our legitimately purchased DVD to a “memory card” or other local-storage medium. These memory cards are useful gadgets to take on a long journey to occupy us as we travel, but the content owners are not comfortable with us being able to make such copies. There are still issues with the form factor for a handheld device like this. To create a viewing experience that is convenient for long journeys, we might end up with a device that is a little bulkier than a phone should be. Maybe a hands-free kit addresses that issue or possibly a Bluetooth headset. Usable keypads increase the size of these devices. Currently, it is quite expensive to provide sufficient storage capacity without resorting to an embedded hard disk. That tends to reduce the battery life, so we might look to the new storage technologies that are being developed. Terabyte memory chips based on holographic techniques may yield the power–size–weight combination that is required. Newer display technologies such as organic LED devices may offer brighter images with less
power consumed. Cameras have already been reduced to a tiny charge-coupled device (CCD)
assembled on a chip, which is smaller than a cubic centimeter. The key to this will be standardization. Common screen sizes, players, video codecs, and connection protocols could enable an entire industry to be built around these devices. Open standards facilitate this sort of thing, and there are high hopes that H.264 (AVC) and the other parts of the MPEG-4 standard will play an important role here.


Personal Video Recorders
Personal video recorders (PVRs) are often generically referred to as TiVo, although they are
manufactured by a variety of different companies. Some of them do indeed license the TiVo software, but others do not. Another popular brand is DirecTV.


Analog Off-Air PVR Devices
A classic TiVo device works very hard to compress incoming analog video to store it effectively and provide trick-play features. The compression quality level can be set in the preferences. The compromise is space versus visible artifacts. At the lowest quality, the video is fairly noisy if the picture contains a lot of movement. This is okay if you are just recording a program that you don't want to keep forever—for example, a time shift to view the program at a different time. If you want to record a movie, you will probably choose a higher-quality recording format than you would for a news program. The functionality is broadly divided into trick-play capabilities and a mechanism to ensure that you record all the programs you want to, even if you do not know when they are going to be aired. In the longer term, these devices scale from a single-box solution up to a home media server with several connected clients. This would be attractive to schools for streaming TV services directly to the classroom. University campus TV, hospital TV services, and corporate video-distribution services are candidates. Standards-based solutions offer good economies of scale, low thresholds of lock-in to one supplier, and good commercial opportunities for independent content developers.


Digital Off-Air PVR Devices

When digital PVR devices are deployed, recording television programs off-air becomes far
more efficient. In the digital domain, the incoming transport stream must be de-multiplexed,
and packets belonging to the program stream we are interested in are stored on a hard disk. The broadcaster already optimally compresses these streams. Some storage benefits could be gained by transcoding them. Note that we certainly cannot add any data back to the video that has already been removed at source. Future development work on PVR devices will focus on storing and managing content that has been delivered digitally. This is within the reach of software-based product designs and does not require massive amounts of expensive hardware.
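The de-multiplexing step described above can be sketched in a few lines. An MPEG-2 transport stream is a sequence of 188-byte packets, each starting with a 0x47 sync byte and carrying a 13-bit packet identifier (PID); a PVR keeps only the packets whose PIDs belong to the selected program. This is a minimal illustration only: a real device would also parse the PAT/PMT tables to discover those PIDs and would resynchronize after errors:

```python
# Minimal sketch of transport-stream de-multiplexing: keep only the
# 188-byte packets whose PID matches the program to be stored. A real PVR
# also parses the PAT/PMT tables to find the PIDs; that step is omitted.

TS_PACKET = 188
SYNC_BYTE = 0x47

def filter_pid(ts_data: bytes, wanted_pid: int) -> bytes:
    """Return only the packets carrying wanted_pid."""
    out = bytearray()
    for i in range(0, len(ts_data) - TS_PACKET + 1, TS_PACKET):
        pkt = ts_data[i:i + TS_PACKET]
        if pkt[0] != SYNC_BYTE:
            continue  # lost sync; a real demux would resynchronize
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]  # 13-bit PID field
        if pid == wanted_pid:
            out += pkt
    return bytes(out)
```

Because the broadcast packets are stored untouched, no quality is lost at this stage; any further saving would have to come from transcoding.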
There are complex rights issues attached to home digital-recording technology, and it is in constant evolution.


Mobile PVR Solutions

Another interesting product was demonstrated by Pace at the International Broadcasting
Convention (IBC) in 2004. It was a handheld PVR designed to record material being broadcast
using the DVB-H mobile-TV standard. Coupling this with the H.264 codec and a working DRM solution brings us very close to a system that could be rolled out very soon. Provided rights issues and the content-delivery technology can be developed at the front end, products such as the PVR2GO shown in Figure 2-10 could be very successful.



The Future
The technology that enables PVR devices is getting cheaper, and the coding techniques are pushing the storage capacity (measured in hours) ever upward. Nevertheless, not every household will want to own a PVR. In addition, the higher end of the functionality spectrum may only ever be available to users with a lot of disposable income. Some of the basic functionality may simply be built into a TV set. As TV receivers are gradually replaced with the new technology, they will ship with video compression and local storage already built in. Pause and rewind of live video, for instance, is very likely to be built into TV sets, and for it to be manufactured cheaply enough, the functionality will be implemented in just a few integrated circuits and will then be as ubiquitous as the Teletext decoders found in European TV sets.
Broadband connectivity is penetrating the marketplace very rapidly—perhaps not quite as fast as DVD players did, but in quite large numbers all the same. A critical threshold is reached when video codecs are good enough to deliver satisfactory video at a bit rate that is equal to or less than what is available on a broadband connection. Indeed, H.264 encoding packed into MPEG-4 multimedia containers coupled with a PVR storage facility and a fast, low-contention broadband link is a potential fourth TV platform that offers solutions to many of the problems that cannot be easily solved on the satellite-, terrestrial-, and cable-based digital-TV platforms. MPEG-4 interactive multimedia packages could be delivered alongside the existing digital-TV content in an MPEG-2 transport stream. Indeed, the standards body has made special provision to allow this delivery mechanism, and MPEG-4 itself does not need to standardize a transport stream because there is already one available.