DU SOL B.Com 3rd Year E-Commerce Notes Chapter 9 Multimedia and E-Commerce
What is the Concept & Role of Multimedia
The term multimedia, as it sounds, is a combination of two or more media to produce a presentable computer output. It is usually associated with the use of sound, video and text to make games, presentations, software, etc. In other words, multimedia is the simultaneous use of data from different sources. These sources are known as media elements.
With fast-growing and ever-changing information technology, multimedia has become a crucial part of the computer world. Its importance has been realized in almost all walks of life, be it education, cinema, advertising or fashion. Throughout the 1960s, 1970s and 1980s, computers were restricted to dealing with two main types of data – words and numbers.
But the cutting edge of information technology introduced faster systems capable of handling graphics, audio, animation and video, and the entire world was taken aback by the power of multimedia. Multimedia is nothing but the processing and presentation of information in a more structured and understandable manner using more than one medium, such as text, graphics, animation, audio and video.
Thus multimedia products can be an academic presentation, game or corporate presentation, information kiosk, fashion-designing etc. Multimedia systems are those computer platforms and software tools that support the interactive uses of text, graphics, animation, audio, or motion video. In other words a computer capable of handling text, graphics, audio, animation and video is called multimedia computer.
If the sequence and timing of these media elements can be controlled by the user, then one can call it Interactive Multimedia.
Describe the various media elements used in Multimedia Technologies.
Different media elements used are –
Inclusion of textual information in multimedia is the basic step towards the development of multimedia software. Text can be of any type: a word, a single line, or a paragraph. The textual data for multimedia can be developed using any text editor. However, to give special effects one needs graphics software which supports this kind of job. One can also use any of the popular word processing packages to create textual data for inclusion in multimedia. The text can have different typefaces, sizes, colours and styles to suit the professional requirements of the multimedia software.
Another interesting element in multimedia is graphics. As a matter of fact, given human nature, a subject is better explained with some sort of pictorial or graphical representation than with a large chunk of text.
This also helps to develop a clean multimedia screen, whereas the use of a large amount of text on a screen makes the presentation dull. Unlike text, which uses the universal ASCII format, graphics does not have a single agreed format; there are different formats to suit different requirements.
The most commonly used format for graphics is .BMP, or bitmap. The size of a graphic depends on the resolution it uses. A computer image uses pixels, or dots on the screen, to form itself. These dots, combined with the number of colours and other aspects, define the resolution.
Resolution of an image or graphic is basically its pixel density and the number of colours it uses, and the size of the image depends on its resolution. A standard VGA (Video Graphics Array) screen can display a resolution of 640 × 480 = 307200 pixels, and a Super VGA screen can display up to 1024 × 768 = 786432 pixels. While developing multimedia graphics one should always keep in mind the image resolution and the number of colours to be used, as these have a direct relation with the image size. If the image size is bigger, it takes more time to load, requires more memory for processing, and occupies more disk space for storage. However, different graphics formats are available which take less space and are faster to load into memory.
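The relation between resolution, colour depth and image size described above can be sketched with a small calculation. The helper function and the colour-depth figures here are illustrative assumptions, not from the original notes:

```python
# Illustrative helper: uncompressed image size from resolution and
# colour depth (bits per pixel).
def image_size_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

# A full VGA screen in 256 colours (8 bits per pixel):
print(image_size_bytes(640, 480, 8))      # 307200 bytes (300 KB)
# A full SVGA screen in true colour (24 bits per pixel):
print(image_size_bytes(1024, 768, 24))    # 2359296 bytes (about 2.25 MB)
```

This makes the point in the text concrete: raising either the resolution or the colour depth multiplies the storage and loading cost of the image.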
There are several graphics packages available to develop excellent images and to compress them so that they take less disk space while using higher resolution and more colours. Packages like Adobe Photoshop, Adobe Illustrator and PaintShop Pro are excellent graphics packages. There are also graphics galleries available on CDs (Compact Discs) with ready-made images to suit almost every requirement. These images can be incorporated directly into multimedia development.
Moving images have an overpowering effect on the human peripheral vision. The following are a few reasons for the popularity of animation.
Showing continuity in transitions:
Animation is a set of static states related to each other by transitions. When something has two or more states, changes between states will be much easier for users to understand if the transitions are animated instead of being instantaneous.
An animated transition allows the user to track the mapping between different subparts through the perceptual system instead of having to involve the cognitive system to deduce the mappings.
Indicating dimensionality in transitions:
Sometimes opposite animated transitions can be used to indicate movement back and forth along some navigational dimension.
One example used in several user interfaces is the use of zooming to indicate that a new object is “grown” from a previous one (e.g., a detailed view or property list opened by clicking on an icon) or that an object is closed or minimized to a smaller representation. Zooming out from the small object to the enlargement is a navigational dimension and zooming in again as the enlargement is closed down is the opposite direction along that dimension.
Illustrating change over time:
Since animation is a time-varying display, it provides a one-to-one mapping to phenomena that change over time. For example, deforestation of the rain forest can be illustrated by showing a map with an animation of the covered area changing over time.
Multiplexing the display:
Animation can be used to show multiple information objects in the same space. A typical example is client-side image maps with explanations that pop up as the user moves over the various hypertext anchors.
Enriching graphical representations:
Some types of information are easier to visualize with movement than with still pictures. Consider, for example, how to visualize the tool used to remove pixels in a graphics application.
Visualizing three-dimensional structures:
As you know the computer screen is two dimensional. Hence users can never get a full understanding of a three-dimensional structure by a single illustration, no matter how well designed.
Animation can be used to emphasize the three-dimensional nature of objects and make it easier for users to visualize their spatial structure. The animation need not necessarily spin the object in a full circle – slowly turning it back and forth a little will often be sufficient. The movement should be slow to allow the user to focus on the structure of the object. You can also let users move three-dimensional objects themselves, but it is often better to determine in advance how best to animate a movement that provides optimal understanding of the object.
This pre-determined animation can then be activated by simply placing the cursor over the object. User-controlled movement, on the other hand, requires the user to understand how to manipulate the object, which is inherently difficult with a two-dimensional control device like the mouse used with most computers – to be honest, 3D is never going to make it big in user interfaces until we get a true 3D control device.
Finally, there are a few cases where the ability of animation to dominate the user’s visual awareness can be turned to an advantage in the interface. If the goal is to draw the user’s attention to a single element out of several or to alert the user to updated information then an animated headline will do the trick.
Animated text should be drawn by a one-time animation (e.g., text sliding in from the right, growing from the first character, or smoothly becoming larger) and never by a continuous animation, since moving text is more difficult to read than static text. The user should be drawn to the new text by the initial animation and then left in peace to read it without further distraction. One of the excellent software packages available for creating animation is Animator Pro, which provides tools to create impressive animation for multimedia development.
Besides animation there is one more media element, known as video. With the latest technology it is possible to include video clips of any type in any multimedia creation, be it a corporate presentation, fashion design, entertainment, games, etc. The video clips may contain dialogues or sound effects and moving pictures.
These video clips can be combined with audio, text and graphics for a multimedia presentation. Incorporation of video in a multimedia package is more important and complicated than other media elements. One can procure video clips from various sources, such as existing video films, or even go for an outdoor video shoot. All such video is available in analog format.
To make it usable by a computer, the video clips need to be converted into a computer-understandable, i.e., digital, format. A combination of software and hardware makes it possible to convert analog video clips into digital format. This alone does not help, as the digitized video clips take a lot of hard disk space to store, depending on the frame rate used for digitization. The computer reads a particular video clip as a series of still pictures called frames. Thus a video clip is made of a series of separate frames, where each frame is slightly different from the previous one.
The computer reads each frame as a bitmap image. Generally there are 15 to 25 frames per second so that the movement is smooth; with fewer frames, the movement of the images will not be smooth. To cut down the space required, there are several modern technologies in the Windows environment. Essentially, these technologies compress the video image so that less space is required.
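The storage problem described above follows directly from the arithmetic of frames. As a rough sketch (the helper function and the 24-bit colour depth are illustrative assumptions):

```python
# Rough uncompressed video size: frame size x frame rate x duration.
def video_size_mb(width, height, bits_per_pixel, fps, seconds):
    frame_bytes = width * height * bits_per_pixel // 8
    return frame_bytes * fps * seconds / (1024 * 1024)

# One minute of 640 x 480 true-colour video at 25 frames per second:
print(round(video_size_mb(640, 480, 24, 25, 60)))   # about 1318 MB
```

Over a gigabyte for a single uncompressed minute is exactly why the compression technologies mentioned in the text are essential.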
Describe what you understand by digital video.
Digital video is closely associated with the DVD (Digital Versatile Disc), which can be used to create innovative, cutting-edge video discs with different sorts of data, including director's cuts, etc. The latest video compression software makes it possible to compress digitized video clips to the maximum, so that they take less storage space. One more advantage of digital video is that the quality will not deteriorate from copy to copy, as the digital video signal is made up of digital code and not an electrical signal. Caution should be taken while digitizing video from an analog source to avoid frame dropping and distortion.
A good quality video source should be used for digitization.
Currently, video is good for –
- promoting television shows, films, or other non-computer media that traditionally have used trailers in their advertising.
- giving users an impression of a speaker’s personality.
- showing things that move, for example a clip from a motion picture. Product demos of physical products are also well suited for video.
Audio has a great role to play in multimedia development. It gives life to the static state of multimedia. Incorporation of audio is one of the most important features of multimedia, enhancing its usability to the full potential. There are several types of sound which can be used in multimedia: human voices, instrumental notes, natural sounds and many more. All of these can be used in any combination, as long as their inclusion in the multimedia is meaningful.
There are many ways in which these sounds can be incorporated into the computer. For example:
- Using microphone, human voice can directly be recorded in a computer.
- Pre-recorded cassettes can be used to record the sound into computer.
- Instrumental sound can also be played directly from a musical instrument for recording into the computer.
The sound transmitted from these sources is analog in nature. To enable the computer to process this sound, it needs to be digitized. As we all know, sound is a repeated pattern of pressure in the air, and a microphone converts a sound wave into an electrical wave. The clarity of the final sound output depends entirely on the shape and frequency of the sound wave.
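Once such a sound wave is digitized, its storage cost follows directly from the sampling parameters. A hedged sketch (the CD-quality figures of 44,100 samples/sec, 16-bit, stereo are assumed for illustration; the notes do not specify them):

```python
# Digitized audio size = sample rate x bytes per sample x channels x time.
def audio_size_bytes(sample_rate, bit_depth, channels, seconds):
    return sample_rate * (bit_depth // 8) * channels * seconds

# One minute of CD-quality stereo sound:
print(audio_size_bytes(44100, 16, 2, 60))   # 10584000 bytes, about 10 MB
```

About 10 MB per minute: a useful rule of thumb for how much disk space digitized audio will demand in a multimedia project.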
When digitized (recorded into the computer), the error in sound can be drastically reduced. Audio needs to be converted into digital format to produce digitized audio for use in multimedia, and these digitized sounds can be re-converted into analog form so that the user can hear them through the speakers. The Musical Instrument Digital Interface, or MIDI, provides a protocol, or set of rules, using which the details of a musical note from an instrument are communicated to the computer.
But MIDI data is not digitized sound. It is recorded into the computer directly from musical instruments, whereas digitized audio is created from analog sound. The quality of MIDI data depends upon the quality of the musical instrument and the sound system. A MIDI file is basically a list of commands to produce the sound.
For example, pressing a guitar key can be represented as a computer command. When the MIDI device processes this command, the result is the sound from the guitar. MIDI files occupy less space compared to digitized audio, and they are also editable. The main benefit of audio is that it provides an exclusive channel separate from the display.
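The size advantage of MIDI over digitized audio can be made concrete. A MIDI note event follows the standard MIDI channel-message layout of three bytes; the comparison figure assumes CD-quality digitized audio:

```python
# A MIDI "note on" channel message is three bytes: a status byte,
# the note number, and the velocity.
NOTE_ON, NOTE_OFF = 0x90, 0x80    # channel-1 status bytes
MIDDLE_C, VELOCITY = 60, 100

note_on = bytes([NOTE_ON, MIDDLE_C, VELOCITY])
note_off = bytes([NOTE_OFF, MIDDLE_C, 0])

# Six bytes of MIDI commands to play one note, versus roughly 176 KB
# for one second of CD-quality digitized audio (44,100 samples x
# 2 bytes x 2 channels):
print(len(note_on) + len(note_off))   # 6
print(44100 * 2 * 2)                  # 176400
```

This is why a full song stored as MIDI commands is a few kilobytes, while the same song as digitized audio runs to many megabytes.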
Speech can be used to offer commentary or help without obscuring information on the screen. Audio can also be used to provide a sense of place or mood. Mood-setting audio should employ very quiet background sounds in order not to compete with the main information for the user's attention. Music is probably the most obvious use of sound: whenever you need to inform the user about a certain work of music, it makes much more sense to simply play it than to show the notes or try to describe it in words.
Multimedia hardware requirements:
For producing multimedia you need hardware, software and creativity. In this section we will discuss the multimedia equipment required in a personal computer (PC) so that multimedia can be produced.
(a) Central Processing Unit. The Central Processing Unit (CPU) is an essential part of any computer. It is considered the brain of the computer, where processing and synchronization of all activities take place. The efficiency of a computer is judged by the speed of the CPU in processing data.
For a multimedia computer a Pentium processor is preferred because of its higher efficiency; at minimum, the CPU should be a 486 with a math coprocessor. The Pentium processor is one step up the evolutionary chain from the 486 series, and the Pentium Pro is one step above the Pentium. The speed of the processor is measured in megahertz, which indicates how many operations the computer can perform in a second.
The faster the CPU, the faster the computer will be able to perform. As multimedia involves more than one media element, including high-resolution graphics and high-quality motion video, one needs a faster processor for better performance. In today's scenario, a Pentium processor with MMX technology and a speed of 166 to 200 MHz (megahertz) is an ideal processor for multimedia. In addition to the processor, one needs a minimum of 16 MB RAM to run Windows and edit large images or video clips, while 32 or 64 MB RAM enhances the capacity of a multimedia computer.
(b) Monitor. As you know, the monitor is used to see the computer output. Generally it displays 25 rows and 80 columns of text. The text or graphics on a monitor is created by an arrangement of tiny dots called pixels. Resolution is the amount of detail the monitor can render, defined in terms of the horizontal and vertical pixels (picture elements) displayed on the screen.
The greater the number of pixels, the better the visualization of the image. Like any other computer device, the monitor requires a source of input. The signals the monitor gets from the processor are routed through a graphics card, although there are computers where this card is built into the motherboard. This card, also called the graphics adapter or display adapter, controls the individual pixels, or tiny points on the screen, that make up the image.
There are several types of display adapter available, but the most popular is the Super Video Graphics Array (SVGA) card, which suits the multimedia requirement. The advantage of an SVGA card is the better quality of graphics and pictures. The PCs now coming to the market are fitted with an SVGA graphics card, which allows images of up to 1024 × 768 pixels to be displayed in up to 16 million colours. What determines the maximum resolution and colour depth is the amount of memory on the display adapter; often you can select the amount of memory required, such as 512 KB, 1 MB, 2 MB, 4 MB, etc.
However, the standard multimedia requirement is 2 MB of display memory (or video RAM). One must keep in mind that more display memory allows more colours and higher resolutions to be displayed. One can easily calculate the minimum amount of memory required for the display adapter as
(Max. Horizontal Resolution × Max. Vertical Resolution × Colour Depth in bits)/8192 = minimum video (or display) memory required, in KB.
For example, for SVGA resolution (800 × 600) with 65,536 colours (a colour depth of 16 bits) you will need
(800 × 600 × 16)/8192 = 937.5 KB, i.e., approximately 1 MB of display memory.
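The same formula can be wrapped in a small helper to check other screen modes. This is a sketch; the second example mode (true colour at 1024 × 768) is an assumption, not from the text:

```python
# The display-memory formula from the text, as a helper function.
def display_memory_kb(h_res, v_res, colour_depth_bits):
    return h_res * v_res * colour_depth_bits / 8192

print(display_memory_kb(800, 600, 16))    # 937.5 KB -> a 1 MB card suffices
print(display_memory_kb(1024, 768, 24))   # 2304.0 KB -> needs a 4 MB card
```

The second result shows why higher resolutions at greater colour depths push you up the 512 KB / 1 MB / 2 MB / 4 MB ladder of display-memory options.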
Another consideration is the refresh rate, i.e., the number of times the image is painted on the screen per second. The higher the refresh rate, the better the image formation; a minimum of 70-72 Hz is often used to reduce eye fatigue. As a matter of fact, higher resolutions require higher refresh rates to prevent screen flicker.
(c) Video Grabbing Card:
As we have already discussed, we need to convert the analog video signal into a digital signal for processing in a computer. A normal computer will not be able to do this alone; it requires special equipment, called a video grabbing card, along with software for the conversion process. This card translates the analog signal it receives from conventional sources, such as a VCR or a video camera, and converts it into digital format.
The software supplied with it captures this digital signal and stores it in a computer file. It also helps to compress the digitized video so that it takes less disk space compared to non-compressed digitized video. The card is fitted into a free slot on the motherboard inside the computer and is connected to an outside source, such as a TV, VCR or video camera, with the help of a cable.
This card receives both video and audio signals from the outside source, and the conversion from analog to digital signal takes place. This process of conversion is known as sampling. It converts the analog signal into digital data streams so that the signal can be stored in binary format as 0s and 1s.
This digital data stream is then compressed using the video capturing software and stored on the hard disk as a file, which is then incorporated into the multimedia. The digitized file can also be edited according to requirements using various editing software packages such as Adobe Premiere. A number of digitizer or video grabbing cards are available in the market; one from Intel, called the Intel Smart Video Recorder III, does a very good job of capturing and compressing video.
(d) Sound Card.
Today's computers are capable of meeting professional multimedia needs. Not only can you use the computer to compose your own music, but it can also be used for speech recognition and synthesis; it can even read an entire document back to you. But before all this happens, we need to convert the conventional sound signal into computer-understandable digital signals.
This is done using a special component added to the system, called a sound card, which is installed into a free slot on the computer motherboard. As in the case of the video grabber card, the sound card takes sound input from an outside source (such as a human voice, pre-recorded sounds, natural sounds, etc.) and converts it into a digital sound signal of 0s and 1s. The recording software, along with the sound card, stores this digitized sound stream in a file.
This file can later be used with multimedia software. One can even edit the digitized sound file and add special sound effects to it. The most popular sound cards are from Creative, such as the Sound Blaster 16 and AWE32. The AWE32 sound card supports 16 channels, 32 voices, 128 instruments and 10 drum sounds, and it also has a CD-ROM interface.
(e) CD-ROM Drive. A CD-ROM is an optical disc of 4.7 inches diameter that can contain up to 680 megabytes of data. It has become a standard in itself, basically for its massive storage capacity and fast data transfer rate. To access a CD-ROM, a special drive known as a CD-ROM drive is required.
Let us look at the term ROM, which stands for 'Read Only Memory': the material contained on the disc can be read (as many times as you like), but the content cannot be changed. As multimedia involves high-resolution graphics, high-quality video and sound, it requires a large amount of storage space and, at the same time, a medium that supports fast data transfer. The CD-ROM satisfies both requirements. Similar to the hard disk drive, the CD-ROM drive has certain specifications which will help you decide which drive best suits your multimedia requirement.
(i) Transfer Rate:
Transfer rate is basically the amount of data the drive is capable of transferring at a sustained rate from the CD to the CPU, measured in KB per second. For example, a 1x drive is capable of transferring 150 KB of data per second from the CD to the CPU; the 'x' stands for 150 KB/sec. This is the base measurement, and all higher rates are multiples of this number. The latest CD-ROM drive available is 64x, which means it can sustain a data transfer rate of 64 × 150 = 9600 KB, i.e., about 9.38 MB, per second from the CD to the CPU.
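The multiplier arithmetic can be expressed directly. A small sketch based on the 150 KB/sec base rate quoted above (the function name is invented for illustration):

```python
# CD-ROM speed ratings are multiples of the 150 KB/sec base (1x) rate.
BASE_KB_PER_SEC = 150

def cd_transfer_kb_per_sec(speed_multiplier):
    return speed_multiplier * BASE_KB_PER_SEC

rate = cd_transfer_kb_per_sec(64)
print(rate)                    # 9600 KB/sec
print(round(rate / 1024, 2))   # 9.38 MB/sec
```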
(ii) Average Seek time:
The amount of time that elapses between a request and its delivery is known as the average seek time. The lower the value, the better the result; the time is measured in milliseconds, and a good access time is 150 ms. Recently, computer technology has made tremendous progress: you can now have CDs which can 'write many, read many times'.
This means you can write your files onto a blank CD with a laser beam. The written material can be read many times, and it can even be erased and re-written. Basically, these re-writable CDs can be used like a simple floppy disk.
(f) Scanner. Multimedia requires high-quality images and graphics, and it takes a lot of time to create them. However, there are ready-made sources, such as real-life photographs, books, art, etc., from which one can easily digitize the required pictures.
To convert these photographs to digital format, one needs a small piece of equipment, called a scanner, attached to the computer. A scanner is a piece of computer hardware that sends a beam of light across a picture or document and records it. It captures images from various sources, such as photographs, posters, magazines, books and the like.
These pictures can then be displayed and edited on a computer. The captured or scanned pictures can be stored in various formats, such as:
File Format Explanation:
PICT – A widely used format compatible with most Macintosh systems
JPEG – Joint Photographic Experts Group – a format that compresses files and lets you choose compression versus quality
TIFF – Tagged Image File Format – a widely used format compatible with both Macintosh and Windows systems
Windows BMP – A format commonly used on MS-DOS and MS- Windows computers
GIF – Graphics Interchange Format – a format used on the Internet, GIF supports only 256 colours or grays
Scanners are available in various shapes and sizes, such as hand-held, feed-in and flatbed types, and for scanning black-and-white only or colour. Some reputed vendors of scanners are Epson, Hewlett-Packard, Microtek and Relisys.
(g) Touchscreen. As the name suggests, a touchscreen is used where the user is required to touch the surface of the screen or monitor. It is basically a monitor that allows the user to interact with the computer by touching the display screen. It uses beams of infrared light that are projected across the screen surface.
Interrupting the beams generates an electronic signal identifying the location on the screen, and the associated software interprets the signal and performs the required action. For example, touching the screen twice in quick succession works like double-clicking the mouse. Imagine how useful this will be for visually handicapped people, who can identify things by touching a surface.
A touchscreen is normally not used for the development of multimedia; it is rather used in multimedia presentation arenas like trade shows, information kiosks, etc.
Uses of Multimedia:
Placing the media in a perspective within the instructional process is an important role of the teacher and library professional. Following are the possible areas of application of multimedia –
- Can be used as reinforcement
- Can be used to clarify or symbolize a concept
- Creates the positive attitude of individuals toward what they are learning and the learning process itself can be enhanced.
- The content of a topic can be more carefully selected and organized
- The teaching and learning can be more interesting and interactive
- The delivery of instruction can be more standardized.
- The length of time needed for instruction can be reduced.
- The instruction can be provided when and where desired or necessary.
Describe Desktop Video Conferencing and Marketing.
Perhaps one of the least well known and explored applications of Internet technology is desktop video conferencing, whereby individuals and groups can communicate visually and aurally in real time via their Net-connected computers, often with very little additional hardware and software being required. Those who have explored this technology have often been most excited by the sense of immediacy, and indeed intimacy, that can be established once participants have overcome any initial stage fright attendant on a medium that is far more up-front than the anonymity and perhaps facelessness of other Internet technologies and interactions.
How to connect?
It is possible to connect to just one other party to have a direct or one-to-one video conference, providing at least one of the two parties knows the IP (Internet Protocol) number of the other party. To have a multi-party conference, a reflector is required.
A reflector is a computer that enables many participants to connect to it; it then "reflects" the video and audio sent to it to all connected participants. Reflector software is available for Unix, Windows and Macintosh computers. A reflector is usually "well connected" to the Internet: there is not much point in setting up a reflector at the end of a dial-up modem connection, as there would be insufficient bandwidth to carry the incoming and outgoing video. If your company's travel budget has been slashed, or if recent terrorist attacks have made employees reluctant to fly, you may be considering videoconferencing as an alternative to face-to-face meetings.
Before you jump in, here’s a comprehensive guide to enterprise-level videoconferencing that covers everything from bandwidth requirements to equipment options to deployment costs.
Client devices. Currently there are three distinct categories of clients defined primarily by usage.
- Desktop. Desktop videoconferencing clients are assigned to a single user. They cost between $600 and $3,000 for a hardware-based system and up to $150 for a software-only client. Connectivity is over IP.
- Small group. Either an appliance that costs between $3,000 and $12,000, or a PC-based system that costs between $6,000 and $14,000. Small-group videoconferencing systems are relatively easy to configure and use. They run over ISDN or IP.
- Large group/boardroom. These provide the highest-quality video but also come with the highest price tag, with systems starting at $10,000. They also run over ISDN or IP.
Videoconferencing can leverage the existing public telephone network, a private IP network or the Internet. The target bandwidth for interactive video communications is in the 300K to 400K bit/sec per stream range. This includes audio and video as well as control signaling.
The H.323 protocol does not require that two or more endpoints in a session send the same data rate they receive. A low-powered endpoint may only be able to encode at a rate of 100K bit/sec but, because decoding is less processor-intensive, it could decode a 300K bit/sec video stream. Nevertheless, in videoconferencing, bandwidth is assumed to be symmetrical. In full-duplex networks such as ISDN, Ethernet, ATM and time-division multiplexed networks, capacity is expressed as bandwidth in one direction, though equal bandwidth is available for traffic in the opposite direction.
You need to estimate the number of simultaneous sessions your network needs to support, and figure out whether your network has enough bandwidth end-to-end. A T-1 offers 1.5M bit/sec in each direction, ample bandwidth for two 512K bit/sec or three 384K bit/sec videoconferences, depending on the amount of simultaneous traffic on the network.
Also, make sure that you have 10/100 switched Ethernet throughout the LAN segments where videoconferencing traffic is expected. Multipoint conference bandwidth (with which three or more locations can see and hear one another) is calculated separately from point-to-point sessions. Multipoint can be conducted in either IP or ISDN environments, and some conferencing units will support both network types. Multipoint conferencing products may be software-based or accelerated with special hardware, and their configuration can produce different bandwidth consumption patterns as well as different user experiences.
For example, when an endpoint is used to host a multipoint conference, the maximum bandwidth for any single participant is the bandwidth allocated to that host divided by the number of locations participating. When you need to have more than four locations on a call at the same time, network-based products are recommended. If you decide that your IP network can’t handle the additional traffic associated with live video sessions in a merged or converged network deployment, your options are to rely on circuit switched networks or to deploy additional IP bandwidth capacity.
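The capacity planning described in the last few paragraphs can be sketched numerically. The link size and session rates follow the figures in the text; the host bandwidth in the multipoint example (768 Kbit/sec) is an assumed value for illustration:

```python
# Bandwidth planning sketch for videoconferencing sessions.
T1_KBPS = 1536   # a T-1 offers ~1.5M bit/sec in each direction

def fits_on_link(per_session_kbps, sessions, link_kbps=T1_KBPS):
    # Checks whether the sessions fit; real planning must also leave
    # headroom for other simultaneous traffic on the link.
    return per_session_kbps * sessions <= link_kbps

def per_participant_kbps(host_kbps, locations):
    # On an endpoint-hosted multipoint call, the host's bandwidth is
    # divided by the number of participating locations.
    return host_kbps // locations

print(fits_on_link(512, 2))            # True: two 512K sessions on a T-1
print(fits_on_link(384, 3))            # True: three 384K sessions on a T-1
print(per_participant_kbps(768, 4))    # 192 Kbit/sec per location
```

The last result illustrates why network-based multipoint products are recommended beyond four locations: an endpoint-hosted call spreads a fixed allocation ever thinner as participants join.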
The WAN connection:
Approximately 80% of the group videoconferencing units installed today interface directly with ISDN. Less than 5% use ATM, and the remainder are on an IP network.
ISDN is recommended when –
- You are planning to connect with people in locations outside your company
- The locations are in Europe, where ISDN is easily available and broadband IP remains at least 50% more expensive than in the U.S.
- Your IP network capacity is lacking and you do not expect to place outbound calls more than two or three hours per month.
If you use ISDN for transport and you want to add centralized user administration or system management, you can still install an Ethernet connection to each device and a management software package such as Polycom’s Global Management System or Vcon’s MXM on a server in the company’s datacenter.
The limitations of ISDN (Basic Rate Interface or Primary Rate Interface) include—
- Availability not widespread in the U.S.
- Difficulty configuring and managing once ordered.
- Subject to service interruptions (single point of failure).
- Distance-driven and metered costs (long-distance).
- The infrastructure supports only one telephony-like service: multipoint conferencing.
Video calls on ISDN cannot be put on hold, cannot be forwarded (when no one answers, when the line is in use or for any other reason), and there has never been a video mail box on ISDN. Recording one side of an ISDN videoconference is possible using an analog VCR provided the appropriate interfaces exist on the local client system.
The IP option:
Using proprietary technologies or H.323 standard-compliant endpoints, an IP network designed only for data can be modified to support business-quality videoconferencing services. Where bandwidth is available, the IT manager would need to add and adjust a few components to provide a complete solution, or outsource the management to a third party such as WireOne’s GlowPoint service or Sprint’s IP videoconferencing services. If the deployment is expected to have more than five or six systems, a centralized user and network administration console such as Polycom’s Global Management System, RADVision’s H.323 gatekeeper or Vcon’s MXM is recommended. Some companies are going a step further and designing an enterprise conferencing portal using technologies such as FVC’s Click-to-Meet.
While these packages differ in their features and functions, they are designed to perform address book management (an important issue when clients are set up behind a firewall and use network address translation), set performance metrics on a per-device or per-user basis, and can even reduce the risk of application data traffic degradation due to excessive bandwidth consumption.
Implementing Quality of Service (QoS) in a LAN helps to protect the integrity of service-sensitive applications without forklift upgrades. Most of the leading network equipment vendors already support common QoS standards, such as RSVP; they only need to be enabled by the network administrator. You should also find out what your backbone provider uses for its QoS.
If the protocol or scheme chosen for QoS in the local loop is not the same as that implemented in the backbone, the enterprise network needs to put QoS translation software in place for QoS requests to operate end-to-end during a videoconference. Even when QoS protocols are in place, you may need additional network tuning to ensure that the video applications don’t crowd out data applications. To avoid this, network managers should segment and manage bandwidth on each switch and router to limit the total, prioritized video traffic. After provisioning appropriate bandwidth and QoS, other challenges remain. One of the biggest obstacles is getting realtime video traffic through firewalls.
Since H.323-compliant applications use dynamically allocated sockets for audio, video and data channels, a firewall must be able to allow H.323 traffic through on an intelligent basis. The firewall must be either H.323-enabled with an H.323 proxy or able to snoop on the control channel to determine which dynamic sockets are in use for H.323 sessions, and to allow traffic through only as long as the control channel is active.
Merging and emerging services:
Since the very essence of videoconferencing is communications and most legacy systems are not on IP networks, the user is likely to encounter situations where protocols need translation across different networks.
When a videoconference needs to span both the ISDN and IP infrastructures, gateways are necessary. RADVision is the leading manufacturer of videoconferencing gateways and offers a variety of form factors and densities to meet diverse network requirements.
Some companies have to share limited resources and want a reservation system to permit room or multipoint control unit (MCU) scheduling. Endpoint and MCU vendors offer some scheduling tools that may meet your company’s needs. Third-party products, such as Collaborative Systems’ Orchestra, MagicSoft’s VC Wizard and Global Scheduling Solutions’ Global Schedule, have unique features. When the videoconferencing basics are in place for group conferencing, you might consider a number of optimizations.
For example, by enabling IP multicast and using intelligent clients, a network can efficiently support multiway meetings without adding an MCU. If using IP multicasting to achieve a multipoint scenario, each client sends only one stream of packets to an IP multicast group and all participating machines receive the packets. In this scenario, bandwidth consumption is lower than when an endpoint or MCU sends out copies of the same packet to each of the receivers.
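The bandwidth saving from IP multicast can be made concrete with a back-of-the-envelope comparison. This is a hedged sketch using the 384 kbit/s rate quoted elsewhere in the text; the function names are illustrative:

```python
def unicast_send_load(stream_kbps, receivers):
    """An endpoint or MCU replicating the same stream once per
    receiver must transmit receivers copies of every packet."""
    return stream_kbps * receivers

def multicast_send_load(stream_kbps, receivers):
    """With IP multicast, one copy sent to the multicast group
    reaches all receivers; send load is independent of their count."""
    return stream_kbps

# Five-way meeting at 384 kbit/s per stream:
print(unicast_send_load(384, 5))    # 1920 kbit/s leaving the sender
print(multicast_send_load(384, 5))  # 384 kbit/s leaving the sender
```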
Another second-generation feature found in products such as Polycom’s ViewStation FX is integration of videoconferencing with streaming media systems, enabling the broadcast of a videoconference from a coder/decoder to many remote viewers via a streaming media server, or the archiving of a videoconference on a streaming media server for later review.
Although an exception to the rule today, large financial services companies that have integrated videoconferencing into their corporate cultures are beginning to deploy desktop videoconferencing capabilities.
With Universal Serial Bus interfaces, setting up a video camera takes only a few minutes, in contrast with earlier desktop products that required opening the PC and installing a card. Low-cost Webcams put all the computational load from compressing video and audio on the host computer. Optional hardware-accelerated cameras designed specifically for videoconferencing, such as Polycom’s ViaVideo or Vcon’s Vigo, produce the best results.
How much does it cost?
Depending on the number of endpoints and the type of client, network videoconferencing can cost as little as the price of a Webcam ($100) per seat to more than $15,000 per conference room.
To budget a videoconferencing deployment, break down the fixed acquisition costs from the recurring and usage-based costs. The exact fixed costs are going to depend on the number of systems and the features your users need. In general, systems provisioned for ISDN will also support IP, but IP-only systems tend to cost several hundred dollars to $1,000 less than ISDN systems because they have fewer components. Management software is sold according to site licenses, from $250 per license to $40,000 or more for unlimited licenses. Complete enterprise conferencing portal environments suitable for large companies can exceed $100,000 per installation, depending on hardware and software components.
Another factor is the cost of installing the last mile. Basic Rate ISDN installation runs about $225 in most regions of the SBC territory, while other regions tend to be higher. The cost of installing a T-1 depends on the distance between your facility and the nearest central office.
Recurring costs are composed of the monthly cost of network access, network usage costs and, potentially, the salary for one or more technicians managing network provisioning, installations, room or conferencing system reservations, technical support and user training. The largest variable in this equation is the network usage costs.
ISDN usage charges vary but can be estimated for individual customers (one site) at 5 cents per minute per B channel. A 384Kbit/sec videoconference will consume six B channels at a cost of approximately 30 cents per minute, or $18 per hour. Companies that negotiate their telecommunications rates with carriers for voice and video usually receive discounts on this rate.
ISPs also charge for capacity, though not by the minute. To calculate the costs of IP backbone services, multiply the data rate by the time. A 384Kbit/sec call for one hour will generate nearly 1.4Gbits of traffic. On a VPN the network usage costs are already fixed and the company will incur no additional charges.
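The two usage formulas above (per-B-channel ISDN metering and IP data volume) can be sketched as follows. The 5-cents-per-minute rate is the estimate quoted in the text; the helper names are my own:

```python
def isdn_hourly_cost(rate_kbps, cents_per_b_channel_minute=5):
    """Estimate ISDN usage cost: each 64 kbit/s B channel is metered
    per minute, so derive the channel count from the call rate."""
    b_channels = rate_kbps // 64
    cents_per_minute = b_channels * cents_per_b_channel_minute
    return cents_per_minute * 60 / 100  # dollars per hour

def ip_volume_gbits(rate_kbps, hours):
    """Traffic volume of an IP call: data rate multiplied by time."""
    return rate_kbps * 1000 * hours * 3600 / 1e9

print(isdn_hourly_cost(384))    # 18.0 dollars per hour (6 B channels)
print(ip_volume_gbits(384, 1))  # about 1.38 Gbits for a one-hour call
```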
Going with a managed service provider can be cost-effective for some regions and some companies. GlowPoint’s Web site offers a calculator, and users can plug in the number of hours of usage per location and the average cost of ISDN service to obtain a cost estimate.
The costs of deploying videoconferencing are as variable as the networks and depend on the number of installations, features and choice of network. Cost of ownership runs about $15 per hour for a midsize enterprise. It’s safe to predict that costs will continue to fall as more people get on the bandwagon. And, in the face of rising travel costs, getting a rapid return on your investment in videoconferencing is easier now than ever before.
Write a short note on various Broadband networks and related concepts: ISDN, ATM, Cell relay.
Various concepts can be defined as—
ISDN : ISDN is a mature technology, which allows a telephone company to configure a telephone line to transmit digital data at high speeds. With standard analog telephone lines (sometimes referred to as POTS), the fastest modem connections to computer networks operate at 28.8 kilobits per second.
Using ISDN technology, a telephone line can connect to a network at 128 kilobits per second. The higher speeds allow users to transmit data much faster, and to use telephone networks to transmit multimedia applications, including low-grade video transmissions. Interest in ISDN technology has expanded greatly in the past year, as the use of the Internet’s World Wide Web (WWW) has become more popular. The high-speed ISDN connections give users the “bandwidth” to download graphics and sound files much faster, making the WWW much more pleasant to use. The idea of ISDN has existed for many years, but was not actually exploited for most of that time, hence the jibe that it stands for It Still Does Nothing.
In fact, it stands for Integrated Services Digital Network, and is based on the following idea. Today’s trunk telephone network and modern telephone exchanges are fully digital. The analogue speech arriving at the exchange from the subscriber’s line is rendered into digital form, passed through the exchanges and trunk lines in digital form, and only converted back to analogue form for sending to the other subscriber’s phone line. Network signaling (within the network itself) is also fully digital.
However, the telephone network is used nowadays for other things: faxes, computer communications, etc. Many of those things are naturally digital, and it is perverse to convert them to analogue merely to get the data to the telephone exchange, where it will be converted back to digital form; the result of this perversity is quite a reduction in the data-carrying capability of the telephone connection. The idea of ISDN is to extend the digital part of the network out over the subscriber’s line, doing any analogue-to-digital conversion at the subscriber’s premises, and at the same time giving the subscriber access to the digital side, which, as we will see, also gives a higher data throughput rate.
Provided that the subscriber is not too far from the exchange, and the cables are in reasonable condition, the ordinary copper pair is quite capable of carrying the basic rate ISDN service, which consists of two 64kbit/sec digital channels called “B” (bearer) channels and a lower-speed “D” channel used for signalling (i.e. for setting up and tearing down calls and similar purposes). Broadly speaking, such a basic rate ISDN line (called ISDN2 by BT; the terms “BRI” and “2B+D” are also used) can give access to several devices at the subscriber’s premises (selected by, say, the last digit of the phone number), and there can be at most two simultaneous calls in progress.
In some applications, a call is made on both B channels at once, giving in effect a 128kbit/sec channel (at double the cost of a single call, of course). There are many other technical advantages with ISDN. To take just one example, the short call set-up and tear-down times make it feasible to close a call (for example just before the next charge increment is due) when no data is being transmitted, and transparently open it again when more data is to be sent, thus saving on call charges. ISDN lines are being heavily promoted in many countries nowadays, and are available for little more than the cost of an “ordinary” line.
Equipment is also available relatively cheaply (e.g. PC cards). BT, on the other hand, charges an arm and a leg in installation charges and rental, and relatively little equipment has been approved for UK use, most of it very expensive compared with conventional (analogue) telephones. The result is that very few individual UK customers have installed ISDN, and, correspondingly, very few BT staff understand what it is, so you have to go out of your way to get started. Most reports about cable operators who also offer telephone services have been very disappointing.
Considering that these companies are offering state-of-the-art connections using fibre optic cables to the nearest distribution point, they would be ideally positioned to provide ISDN service to their customers, yet few of them seem to even know what ISDN is! Their telephone service seems, all too often, to be an unloved auxiliary to their main business of providing for-payment television channels.
In the USA, the EFF (Electronic Frontier Foundation) campaigned strongly in 1993 for the full exploitation of ISDN as part of an information infrastructure. At that time, the availability of ISDN in the USA was very patchy, being easily available in some areas, and to all intents unavailable in others.
There is no need to discuss this in detail here because much of it is second-hand, but basically it needs all the participants to have ISDN connections, and they in effect make “phone” calls to each other, or to a conference control centre (technically, something called an MCU), to set up their conference.
Obviously, such a conference is closed against intruders, and any number of simultaneous conferences can be in progress without interfering with each other in any way (up to the total capacity of the phone network!), both of which are unlike the Mbone. The cost of such international telephone calls is quite high, but a lot less than the cost of international travel and hotels, quite apart from the cost in terms of staff time. The minimum that is usually contemplated for videoconferencing is a call over two channels, corresponding to both of the channels of an ISDN2 (BRI) line, giving 128kb/s at the European ISDN standard. For better results, the usual choice is three pairs (6 channels); of course, this is only feasible if you have something better than ISDN2, normally a PRI (Primary Rate Interface) connection.
A primary rate interface can support up to 30 B channels (in European usage), although one might subscribe to a smaller number in practice. Inland, and to some countries, the cost is the same as for the corresponding number of telephone calls, but to other countries it is necessary to use the special number prefix applicable to data calls (000 instead of 00), which attracts a higher call rate in recognition of the fact that the telco’s normal speech-compression techniques cannot be used on data calls.
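Since each B channel carries 64 kbit/s, the aggregate rates quoted in this section follow from a simple multiplication. A sketch, with an illustrative function name:

```python
def aggregate_rate_kbps(b_channels, channel_kbps=64):
    """Bearer capacity when B channels are bonded into one call."""
    return b_channels * channel_kbps

print(aggregate_rate_kbps(2))   # 128  (BRI / ISDN2: both channels of 2B+D)
print(aggregate_rate_kbps(6))   # 384  (three pairs, the usual videoconferencing choice)
print(aggregate_rate_kbps(30))  # 1920 (a full European PRI)
```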
ATM. Asynchronous Transfer Mode is a network technology operating at the Data Link layer and, to some extent, in the Physical and Network layers. ATM is the transmission system for Broadband ISDN or B-ISDN. The goal of ATM is to allow a variety of network services to be provided by the same network architecture.
One physical connection can support many ATM virtual circuits. Each virtual circuit sends as many packets as necessary to transfer the data for that circuit. A video circuit could be sending 100 Mbps on the same physical connection on which a 64 Kbps phone circuit was operating. ATM is therefore said to support variable bandwidth circuits.
Data in an ATM network is transmitted in fixed-size packets called cells. Each cell is 53 bytes and contains 48 bytes of data (or higher-level protocol headers). Although each cell is always sent at the same rate on the physical network, a circuit can vary its effective bit rate by sending a varying number of cells every second. ATM is circuit-oriented. Before a computer on an ATM network can send any data, it must first establish a virtual circuit connecting it to the destination.
Virtual circuits are point-to-point and bi-directional. (It should be noted that this prohibits the concept of broadcasting.) Virtual circuits are established within virtual paths. A series of virtual paths can be connected to form a virtual circuit. Many circuits can be supported by one virtual path. Many virtual paths can exist over a single transmission path. ATM switches perform the routing in an ATM network.
The Virtual Circuit Identifier and the Virtual Path Identifier in the ATM cell header indicate to the ATM switch how a packet should be routed. Virtual paths can be permanent or switched. A permanent virtual path is established by the network manager and exists for a long period of time, often for the life of the network.
Virtual circuits can be routed over the existing permanent virtual paths. It is also possible to have switches that route virtual paths. Because permanent virtual paths are simpler, most ATM networks use them.
SONET/SDH. The Synchronous Optical NETwork (SONET) and the Synchronous Digital Hierarchy (SDH) are physical layer protocols frequently used to transport ATM cells. SONET is also used to transport almost all long-distance telephone calls in the United States. SONET defines a 9-by-90-byte frame with the first 3 columns containing control information. The frame is sent row-wise.
Three bytes of control are sent followed by 87 bytes of data then the next 3 bytes of control and the next 87 bytes of data until the whole frame is sent.
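A quick calculation makes the frame layout concrete. This sketch is based only on the 9-by-90-byte frame with 3 control columns described above:

```python
ROWS, COLS, CONTROL_COLS = 9, 90, 3

frame_bytes = ROWS * COLS                    # total bytes in one frame
control_bytes = ROWS * CONTROL_COLS          # 3 control bytes per row
payload_bytes = frame_bytes - control_bytes  # 87 data bytes per row

print(frame_bytes, control_bytes, payload_bytes)  # 810 27 783
```

So each frame carries 783 bytes of data against 27 bytes of control overhead, sent row by row as three control bytes followed by 87 data bytes.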
ATM Adaptation Layers. ATM provides several different service classes –
- AAL type 1 : Constant Bit Rate (CBR), real-time; good for phone calls
- AAL type 2 : Variable Bit Rate (VBR), real-time; compressed video
- AAL type 3/4 : VBR, non-real-time
- AAL type 5 : a simple and efficient AAL
- Available Bit Rate (ABR) : tells the application how much bandwidth is available
- Unspecified Bit Rate (UBR) : take what you get
The ATM Adaptation Layers correspond closely to these ATM service classes.
Higher-level protocols call the Adaptation Layer to send a block of data. The Adaptation Layer adds its own header and trailer to the data. The large block is then divided into 48-byte pieces, which are sent in ATM cells. At the receiving end, the data from the cells is reassembled into a single block of data. This process is called Segmentation And Reassembly (SAR).
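The SAR process can be sketched as a toy segmentation routine. This is a deliberate simplification: real AAL trailers, length fields and padding rules (e.g. in AAL5) are more involved than shown here:

```python
CELL_PAYLOAD = 48  # data bytes carried in each 53-byte ATM cell

def segment(data):
    """Split a higher-level block into 48-byte cell payloads,
    zero-padding the final piece to a full payload."""
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD]
        cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
    return cells

def reassemble(cells, length):
    """Receiving side: concatenate payloads and trim the padding."""
    return b"".join(cells)[:length]

msg = b"x" * 100
cells = segment(msg)
print(len(cells))                          # 3 cells for a 100-byte block
print(reassemble(cells, len(msg)) == msg)  # True
```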
Quality of Service:
ATM networks provide the user with a guaranteed quality of service. When a user establishes a virtual connection under ATM, they can optionally specify –
- Peak Cell Rate
- Sustained Cell Rate (burstiness is PCR/SCR)
- Cell Delay Variance Tolerance (jitter)
- Cell Loss Ratio
The network makes sure that the necessary resources are available to provide this level of service before permitting the new connection. A real-time service, such as video transport, may require a low level of jitter. Jitter is the variance in how long it takes to deliver a packet. For streaming real-time services, it is important that the transmission delay not vary significantly.
IP over ATM:
The Internet protocol can run over ATM, but a few adjustments have to be made to account for ATM’s inability to broadcast. ATM systems running the Internet protocol must have an ARP server and a broadcast server.
The ARP server converts from IP addresses to ATM addresses. If an application must broadcast a message, it can send it to the broadcast server that will send it to all hosts on the network. Each host must have a permanent virtual circuit to these servers.
Cell relay : A data transmission technology based on transmitting data in relatively small, fixed-size packets or cells. Each cell contains only basic path information that allows switching devices to route the cell quickly.
Cell relay systems can reliably carry live video and audio because cells of fixed size arrive in a more predictable way than packets or frames of varying size. Asynchronous Transfer Mode (ATM) is the cell relay standard set by the CCITT organization; ATM uses a cell of 53 bytes. Cell relay is a statistically multiplexed interface protocol for packet-switched data communications that uses fixed-length packets, i.e., cells, to transport data. Cell relay transmission rates usually are between 56 kb/s and 1.544 Mb/s, i.e., the data rate of a DS1 signal.
Cell relay protocols (a) have neither flow control nor error correction capability, (b) are information-content independent, and (c) correspond only to layers one and two of the ISO Open Systems Interconnection—Reference Model. Cell relay systems enclose variable-length user packets in fixed-length packets, i.e., cells, that add addressing and verification information.
Frame length is fixed in hardware, based on time delay and user packet-length considerations. One user data message may be segmented over many cells. Cell relay is an implementation of fast packet technology that is used in (a) connection-oriented broadband integrated services digital networks (B-ISDN) and (b) connectionless IEEE 802.6, switched multi-megabit data service (SMDS). Cell relay is used for time-sensitive traffic such as voice and video.