
  • 8/6/2019 Seminar Vcp






    A Seminar Report on


    Prepared by : Rosy J. Singh

    Roll No. : 51

    Class : B.E.IV (Electronics Engineering) 8th Semester

    Year : 2001-2002

    Guided by : Ms. Avani Patel

    Department of Electronics Engineering.

    Sarvajanik College of Engineering & Technology,
    Dr. R. K. Desai Road, Athwalines, Surat - 395001, India.




    Many have contributed to the successful completion of this report. I

    would like to place on record my grateful thanks to each of them.

    Ms. Avani Patel, my guide, has helped me immensely throughout

    the research. She has also helped in reading, modifying and correcting

    the report.

    I am also indebted to the staff and members of the Electronics
    department of SCET for their assistance.




    Cell phones have been the wave of our future for some time now.

    More than 100 million cell phones are in use in the world today, and this
    figure will rise to more than 200 million by 2003, according to International
    Data Corp.'s (IDC) most recent estimates. Every year they get better and
    smaller, but what else can the cellular phone companies do in order to
    improve or upgrade them?

    Video cell phones have been introduced as the next (third)
    generation in cell phone history. Samsung Electronics and Geo Interactive
    Media Group announced in November 2000 that they had successfully
    developed the first video cell phone.

    The seminar aims to analyze how the video cell phone works and what
    its requirements are, for example MPEG-4, bandwidth, etc. Basically, it deals
    with the following three requirements: the wireless network needed, the
    hardware, and finally the video code required. It also studies the first
    working cell phone produced by Samsung and GEO using the A2 chip.





    1. Introduction
    2. The evolution of mobile communication
    3. 3G technology
       3G Features
       3G Talking Points
       3G: Ready or not?
       3G: Video on mobile
       Video on mobile: some innovative applications
       First video cell phone
    4. Wireless networks
       Overview of some wireless systems
       Improvements needed for the wireless network
       CDMA2000: Delivering on 3G
       CDMA Deployments
    5. A2 ASIC Chip
       Working of A2
       Features of A2
    6. Video Code
       Video code
       MPEG Architecture
       MPEG-2 vs MPEG-4
       Conclusion












    More than a decade ago, Martin Cooper invented a product that has
    become ubiquitous in modern life: the cell phone. However, initially it was
    huge and heavy, and the connection and reception of the network was also
    bad. At that time, cell phones were not a necessity and were mostly owned
    by wealthy people, business people, and important authoritative figures. As
    time went by, technology improved, and the cell phone kept getting smaller,
    cheaper and better. Nowadays, almost everyone has a cell phone, and the
    functionality of the cell phone has increased as well. It not only works as a
    phone; it has clock, alarm, games, calendar, notes, reminder and online
    features. This stage of cell phones was the second generation cell phone.

    Video has emerged as the latest obsession in the high-tech world. The
    third generation cell phone has the video function added to it. Therefore,
    people can not only send text messages, they will also be able to send video
    messages to their friends, families, co-workers, classmates etc.

    This report briefly describes what the requirements of the video cell
    phone are. There are three main points that people should be working on for
    the video cell phone: the wireless network requirement, the hardware needed,
    and the kind of video code that should be used. The report is divided into
    chapters starting from the evolution of mobile communication, then covering
    the various aspects of 3G phones, and finally dealing with the details of the
    requirements of video cell phones.






    The mobile communications industry has evolved in three stages.
    Three generations of mobile phones have emerged so far, each successive
    generation more reliable and flexible than the last:

    Analog: you could only easily use analog cellular to make voice calls, and
    typically only in any one country.

    Digital: digital mobile phone systems added fax, data and messaging
    capabilities as well as voice telephone service in many countries.

    Multimedia: multimedia services add high speed data transfer to mobile
    devices, allowing new video, audio and other applications through mobile
    phones, and allowing music, television and the Internet to be accessed
    through a mobile phone.

    With each new generation of technology, the services which can be deployed
    on them become more and more wide-ranging, and are truly limited only by
    the imagination.



    During the first and second generations, different regions of the world
    pursued different mobile phone standards, but they are converging to a
    common standard for mobile multimedia called Third Generation (3G) that is
    based on CDMA technology. Europe pursued NMT and TACS for analog and
    GSM for digital; North America pursued AMPS for analog and a mix of
    TDMA, CDMA and GSM for digital. 3G will bring these incompatible
    standards together.

    The International Telecommunications Union (ITU) took the initiative
    to unify the disparate standards employed by various countries. The initiative
    was in the form of International Mobile Telecommunications 2000
    (IMT-2000), also called 3G.

    IMT-2000 provides a framework for worldwide wireless access by
    linking diverse terrestrial and satellite based networks, mobile communication
    technologies and systems for fixed wireless access. The goal: to fulfil the
    dream of anywhere, anytime communication.

    Following on the heels of analog and digital technology, the Third
    Generation will be digital mobile multimedia offering broadband mobile
    communications with voice, video, graphics, audio and other information.
    This transition is shown in the table below:




    Generation  Type     Time        Description
    1st         Analog   1980s       Voice centric, multiple standards
                                     (NMT, TACS etc.)
    2nd         Digital  1990s       Voice centric, multiple standards
                                     (GSM, TDMA, CDMA etc.)
    2.5G        Digital  Late 1990s  Introduction of new higher speed data
                                     services to bridge the gap between the
                                     second and third generations, including
                                     services such as General Packet Radio
                                     Service (GPRS) and Enhanced Data Rates
                                     for Global Evolution (EDGE)
    3rd         Digital  2000s       Voice and data centric, single standard
                                     with multiple modes




    3G Features

    3G has the following features:

    Packet everywhere:

    With Third Generation (3G), the information is split into separate but related

    packets before being transmitted and reassembled at the receiving end.

    Packet switched data formats are much more common than their circuit switched

    counterparts. Other examples of packet-based data standards include TCP/IP, X.25,

    Frame Relay and Asynchronous Transfer Mode (ATM). As such, whilst packet

    switching is new to the GSM world, it is well established elsewhere. In the mobile

    world, CDPD (Cellular Digital Packet Data), PDCP (Personal Digital Cellular

    Packet), General Packet Radio Service (GPRS) and wireless X.25 technologies have

    been in operation for several years. X.25 is the international public access packet

    radio data network standard.
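    The split-and-reassemble idea described above can be sketched in a few
    lines of Python. This is an illustrative toy, not the packet format of any
    particular standard named here; the chunk size and sequence-number scheme
    are assumptions for demonstration only:

```python
# Toy packet-switching sketch: split a message into numbered packets,
# let them arrive in any order, and reassemble them at the receiver.

def packetize(message: bytes, size: int):
    """Split `message` into (sequence_number, chunk) packets of `size` bytes."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Sort packets by sequence number and join the chunks."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = packetize(b"separate but related packets", size=8)
shuffled = list(reversed(packets))      # packets may arrive out of order
assert reassemble(shuffled) == b"separate but related packets"
```

    The sequence number is what makes the packets "separate but related":
    the receiver can rebuild the original stream no matter what order the
    network delivers them in.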

    Internet Everywhere:
    The World Wide Web is becoming the primary communications
    interface: people access the Internet for entertainment and information
    collection. There is a trend away from storing information locally in specific
    software packages on PCs towards storing it remotely on the Internet. Hence,
    web browsing is a very important application for packet data.

    High Speed:

    Speeds of up to 2 Megabits per second (Mbps) are achievable with Third

    Generation (3G). The data transmission rates will depend upon the environment the

    call is being made in; it is only indoors and in stationary environments that these

    types of data rates will be available. For high mobility, data rates of 144 kbps are



    expected to be available; this is only about three times the speed of today's
    fixed telecom modems.
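    A back-of-the-envelope calculation makes these rates concrete. The
    1 megabyte file size is an arbitrary example; the rates are the ones quoted
    above (plus the 9.6 kbps 2G circuit rate discussed later in this report):

```python
# Transfer time for a 1 MB file at different generations' data rates.
# Rates are in bits per second, so the file size is converted to bits.

FILE_BITS = 1_000_000 * 8              # 1 megabyte expressed in bits

for name, rate_bps in [("2G circuit data (9.6 kbps)", 9_600),
                       ("3G high mobility (144 kbps)", 144_000),
                       ("3G stationary/indoor (2 Mbps)", 2_000_000)]:
    print(f"{name}: {FILE_BITS / rate_bps:.0f} s")
# 9.6 kbps -> ~833 s, 144 kbps -> ~56 s, 2 Mbps -> 4 s
```

    The jump from minutes to seconds is what makes applications like file
    transfer and streaming plausible on 3G.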

    New Applications, Better Applications:

    Third Generation (3G) facilitates several new applications that have not

    previously been readily available over mobile networks due to the limitations in data

    transmission speeds. These applications range from Web Browsing to file transfer to

    Home Automation- the ability to remotely access and control in-house appliances

    and machines. Because of the bandwidth increase, these applications will be even

    more easily available with 3G than they were previously with interim technologies

    such as GPRS.

    Service Access:

    To use Third Generation (3G), users specifically need:
    A mobile phone or terminal that supports Third Generation (3G).
    A subscription to a mobile telephone network that supports Third
    Generation (3G). Use of Third Generation (3G) must be enabled for that
    user. Automatic access to 3G may be allowed by some mobile network
    operators; others will charge a monthly subscription and require a specific
    opt-in to use the service, as they do with other non-voice mobile services.
    Knowledge of how to send and/or receive Third Generation (3G)
    information using their specific model of mobile phone, including software
    and hardware configuration (this creates a customer service requirement).
    A destination to send or receive information through Third Generation
    (3G). From day one, Third Generation (3G) users can access any web page
    or other Internet applications, providing an immediate critical mass of users.
    These user requirements are not expected to change much for the
    meaningful use of 3G.



    3G Talking Points

    The telecommunications world is changing as the trends of media
    convergence, industry consolidation, Internet and IP technologies and mobile
    communications collide into one. Significant change will be brought about by
    this rapid evolution in technology, with Third Generation mobile Internet
    technology a radical departure from what came before in the first and even
    the second generations of mobile technology. Some of the changes include:

    People will look at their mobile phone as much as they hold it to their ear. As

    such, 3G will be less safe than previous generations- because television and

    other multimedia services tend to attract attention to themselves- instead of

    hands-free kits, we will need eyes-free kits!

    Data (non-voice) uses of 3G will be as important as and very different from

    the traditional voice business

    Mobile communications will be similar in its capability to fixed

    communications, such that many people will only have a mobile phone

    The mobile phone will be used as an integral part of the majority of
    people's lives; it will not be an added accessory but a core part of how they
    conduct their daily lives. The mobile phone will become akin to a remote
    control or magic wand that lets people do what they want, when they want.

    3G technology: ready or not?

    Third-generation wireless technology finally seems to be arriving. South

    Korea and Japan have it and even in the United States, there are signs of progress. So

    where, as this rush to robustness picks up steam, is the granddaddy of all high-end

    mobile applications -- streaming multimedia?



    Asia and Europe remain ahead in wireless adoption across the board, and the

    slickest new services will be old hat in Tokyo by the time American consumers get a

    hold of them.

    In Korea, there is tremendous experimentation with all types of
    multimedia. Japan has seen a huge hit with messaging of still images, and
    video is right around the corner. Japanese service provider J-Phone unveiled
    a service earlier this year that allows digital images, taken with digital
    cameras or cameras built into some models of phone, to be shared among
    friends.

    Streaming media is supposed to be the killer application for 3G. The
    deployments of CDMA2000 1X by Sprint, Verizon and Canada's Bell
    Mobility can start to do the trick. It has an advertised data rate of up to
    144 kbps; more likely, that rate will actually be 40 to 60 kbps, and video is
    clearly viable in those ranges.

    3G: Video on mobile. Who wants it?
    Who's going to want to watch video on a mobile device? This question
    prompted a survey of streaming video advocates and observers.
    Is video on mobile more likely to capture the hearts of suited
    executives, ever alert for the next new thing, or the imaginations of the
    unwashed masses who just want to have fun? The answers are different
    depending on which quadrant of the globe you're standing on, of course.

    Mobile video is already off and running in Japan, with NTT DoCoMo
    announcing a new platform that provides "one-to-many" live video
    distribution to 3G (third generation) handsets.
    The platform, which delivers streaming media using MPEG-4
    technology, is the first of its kind. It was developed with U.S.-based
    PacketVideo.


    It is not expected that people will watch 10 or 15 minutes or a half
    hour of a television program on their mobile device, because you can get
    that on television and it's not really that interesting on a mobile. What is
    expected is to break out the 15- and 20-second jokes, the pratfalls or the
    explosions, and put them in an environment where people can send them to
    friends.

    There are certain sports where audio just won't deliver the full experience.

    Also, music video has substantial value to consumers -- there are business models

    where people will pay for videos.

    News is another really good example. A video-capable cell phone
    would allow users to see free Internet stock tickers, sports scores and
    eventually each other through two-way video conferencing; no typing,
    logging in or passwords required. These are just a few of the uses that a
    video cell phone can be put to.

    Video on mobile: Innovative Applications
    The text-messaging craze has swept Europe and Asia. Handset makers
    and carriers are beginning to talk about MMS, the multimedia messaging
    service that is the natural outgrowth of text-only SMS, or short message
    service. Popular early applications are more likely to be users' own creations.
    When people are walking around town, they can subscribe to certain topics.
    So when you're walking around New York, videos associated with the exact
    area where you happen to be standing or walking at the time will pop up.
    For instance, you're doing a tour of Washington, D.C., and you go by the
    Reflecting Pool, and up comes the Martin Luther King "I have a dream"
    speech, which you can see right where you are. Sort of like a video time
    capsule.

    Specific lifestyle applications: Some unusual examples-



    Imagine putting the yoga teacher in your pocket, or the professional chef:
    'How do I make this meal?' A housewife may not have a computer in her
    kitchen, but if she has her cell phone, being able to see ten step-by-step
    15-second segments of how to cook a gourmet meal is something she can
    use.


    The First Video Cell Phone
    Star Trek-style science fiction is to become a mainstream reality for
    the many millions of people who are leading increasingly cellular-centric
    professional and personal lives, with the introduction of the world's first ever
    MPEG-4 streaming video cell phone, based on GEO's Emblaze A2 video
    ASIC chip, by Samsung Korea. The phone represents a breakthrough in
    cellular technology, enabling the playback of streaming video content over
    GSM and CDMA based IS-95B and IS-95C networks.
    The new phone allows users to view rich media content directly on
    their mobile phones by simply pressing the video function key and browsing
    through the list of available content, in a similar fashion to TV channels.

    The phone enables varying viewing experiences (video size, speed and
    image quality) according to the bandwidth provided by the network over 2G,
    2.5G and 3G networks. As bandwidth increases, the user can enjoy a larger
    video size with a smoother image and better image quality.
    The phone is currently compatible with existing CDMA systems at
    speeds of 9.6 kbps (IS-95A) and 64 kbps (IS-95B), and with CDMA2000 1X,
    which is capable of reaching a data transfer rate of up to 144 kbps.


    The Emblaze A2 video ASIC chip has created a revolution by enabling
    video playback over standard devices using standard battery technology,
    memory and processors, something that was formerly deemed impossible by
    engineers worldwide. The device will allow people to receive and play video
    from the Internet and from email.

    What has been holding back other companies from creating similar products

    is the fact that "third generation," or 3G, networks are still years away. With its much

    higher bandwidth, 3G systems are expected to enable many new wireless

    applications, including full-motion video. But Samsung and Geo assert that current

    wireless networks can support their video phone.


    Among the three requirements, it is widely believed that the technology
    of the wireless network has to improve first. This is because people believe
    that video cell phones are not possible on the second generation network.
    The second generation (2G) network basically includes Code Division
    Multiple Access (CDMA), Time Division Multiple Access (TDMA), and the
    Global System for Mobile communication (GSM).

    Overview of some wireless systems:


    [Timeline figure (partial): 1983: AMPS; 1986: NMT-900; 1994: DCS 1800]




    NMT: Nordic Mobile Telephone
    Developed by Denmark, Norway, Finland & Sweden. Uses the 450 & 900
    MHz bands.

    AMPS: Advanced Mobile Phone System
    An analog system. Works at 800 MHz.

    PDC: Personal Digital Cellular
    Used mainly in Japan.

    UMTS: Universal Mobile Telecommunications System
    The European proposal for IMT-2000, prepared by ETSI. It represents an
    evolution from the 2nd generation GSM system to the 3rd generation system.









    TDMA: is a method of utilizing radio spectrum. It provides each user access

    to the entire frequency channel for a brief period, during which the user

    transmits data. The users' frequency channel is shared with other users who

    have time slots allocated at different times.

    GSM :is an international, non-proprietary system that is constantly evolving.

    GSM uses digital technology and TDMA methods to transmit the data. Voice

    is digitally encoded via a unique encoder, which emulates the characteristics

    of human speech.

    CDMA: is a "spread spectrum" technology, which means that it spreads the
    information contained in a particular signal of interest over a much greater
    bandwidth than the original signal.
    Spread spectrum involves spreading the bandwidth needed to transmit
    the data, which causes an increase in resistance to narrowband interference.
    All narrowband signals are spread into broadband signals using the same
    frequency range; all senders use the same frequency band.
    To separate the different channels, code division multiplexing (CDM) is
    used. The spreading is achieved by using a special code. The techniques used
    are Direct Sequence Spread Spectrum (DSSS) and Frequency Hopping
    Spread Spectrum (FHSS). Each channel is allocated a code which the
    receiver has to apply to recover the signal. Without knowing the code, the
    signal cannot be recovered and behaves as background noise.



    Frequencies are a scarce resource around the world. Spread spectrum
    allows an overlay of new techniques at exactly the same frequencies at which
    current narrowband signals operate. This is done for the US mobile phone
    systems: while the frequencies around 850 MHz are already in use for TDM
    & FDM (AMPS & IS-54), CDM (IS-95) is still possible.
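    The direct-sequence spreading idea can be sketched in Python. This is a
    toy model only: the 8-chip code below is made up for illustration (real
    CDMA systems use much longer, carefully chosen codes), and XOR over
    bits stands in for the analog chip multiplication:

```python
# Toy direct-sequence spreading: each data bit is XORed with a per-channel
# chip code. A receiver applying the same code recovers the bits; without
# the code the chip stream looks like noise.

CODE = [1, 0, 1, 1, 0, 0, 1, 0]        # hypothetical 8-chip spreading code

def spread(bits, code):
    """Expand each data bit into len(code) chips."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """Undo the spreading and majority-vote each chip block back to a bit."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        block = [chips[i + j] ^ code[j] for j in range(n)]
        out.append(round(sum(block) / n))
    return out

bits = [1, 0, 1, 1]
assert despread(spread(bits, CODE), CODE) == bits
# With a wrong code, roughly half the chips in each block disagree, so the
# despread output is ambiguous -- the signal behaves as background noise.
```

    The majority vote also hints at why spread spectrum resists narrowband
    interference: a few corrupted chips in a block do not flip the recovered bit.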

    Improvements Needed For The Wireless Network

    In the second generation network, the transmission rate is slow. In
    1993, the first standard, CDMA IS-95A, had been completed. Six years later,
    in July of 1999, CDMA2000 had been standardized. A few months later, the
    speed of CDMA had been increased to 64 kilobits per second (kbps). And by
    April of 2000, CDMA2000 1X had been introduced, increasing the speed of
    CDMA to 144 kbps. However, in November of 2000, the wireless networks
    could only support 10 to 20 kbps, but streaming video would require much
    higher bandwidth than this. Therefore, if one wanted to transfer video data,
    the speed of the network would not be enough.

    In 2001, one could say that the network was between the 2G and 3G
    stages. The 3G network is a broadband, packet-based transmission of text,
    digitized voice, video and other multimedia, at data rates up to and possibly
    higher than 2 megabits per second (Mbps). This offers a consistent set of
    services to mobile computer and phone users no matter where they are
    located in the world.

    Why does the network speed need to improve in order to have a video cell

    phone? As discussed before, streaming video needs higher bandwidth to transfer the

    video data. If the network is not improved, the wireless network cannot provide

    quality videos or images even if a video cell phone exists in the current network

    status. As a result, the 3G network is now being developed.

    CDMA2000: Delivering on 3G



    CDMA2000 is an ITU-approved, IMT-2000 (3G) standard. IMT
    (previously called the Future Land Mobile Telecommunication System)
    allows for terminal and user mobility, supporting the idea of universal
    personal telecommunication (UPT).


    The first 3G networks to be commercially deployed were launched in Korea in

    October 2000 using CDMA2000 technology.



    The CDMA2000 evolution path is flexible and future-proof:

    [Figure: CDMA2000 evolution path (partially recovered)]
    - CDMA (IS-95A): data up to 14.4 kbps
    - CDMA (IS-95B): data up to 115 kbps
    - CDMA2000 1X: increase in voice capacity and packet data on a single
      carrier; the first 3G technology
    - CDMA2000 1xEV-DO (Phase 1): high-speed packet data on a dedicated
      carrier
    - CDMA2000 1xEV-DV (Phase 2): integrated voice and data; up to 4.8 Mbps

    CDMA2000 is a family of technologies allowing seamless evolution from

    CDMA2000 1X through CDMA2000 1xEV-DO (Evolution - Data Only, offering

    data speeds of up to 2.4 Mbps on a separate 1.25 MHz carrier) and CDMA2000



    1xEV-DV (Evolution Data/Voice, which integrates voice and data on the same

    1.25 MHz carrier and offers data speeds of up to 4.8 Mbps).


    A2 ASIC Chip

    The SPH-X2000 cell phone (by Samsung & GEO) uses the GEO
    Emblaze A2 video ASIC chip to deliver video over the wireless system. The
    A2 ASIC chip is integrated into the cell phone to allow it to play multimedia
    (see figure). The A2 ASIC chip is based on the ARM platform as a
    high-performance, fully programmable core processor, and includes on-chip
    memory. The interfaces include the LCD controller, voice codec, audio DAC
    and flash memory. This chip can deliver and receive MPEG-4 coded video
    over the second generation networks, namely CDMA, TDMA and GSM. The
    SPH-X2000 cell phone is an amazing product because many people think
    that video cell phones cannot function on the second generation network.
    Fortunately, it is compatible with the existing CDMA systems at speeds of
    9.6 kbps and 64 kbps, and with CDMA2000 1X. The A2 video ASIC chip
    can deliver 15 frames per second on the second generation network. The cell
    phone is also able to deliver 30 frames per second, depending on the speed
    of the network. In other words, the phone enables different viewing features,
    for example video size, speed and image quality according to the different
    bandwidths provided by different networks. These bandwidths vary between
    2G, 2.5G and 3G. The phone can deliver and receive larger video sizes;
    when the bandwidth of the network is increased, there will be smoother
    images and better image quality.



    [Figure: Wireless Multimedia Terminal Based on A2]



    The A2 ASIC Chip: Working
    The A2 ASIC chip works like this: it receives compressed video and
    audio from a baseband chip, and then decodes the video and audio. The
    decoded video is sent to the LCD controller, and the decoded audio to the
    voice codec / audio DAC. The external flash memory is used for storing the
    A2 ASIC software. Other than that, the A2 ASIC includes all the memory it
    needs for processing and depicting data. This eliminates the need for an
    external memory and saves both power and space, which are crucial in
    mobile applications.

    The A2 ASIC chip is usually controlled by a baseband chip, which is the

    host. A generic host bus and a programmable host protocol enable the A2 ASIC chip

    to be used as a multimedia co-processor for a wide variety of baseband devices.

    Both the baseband chip and the A2 ASIC chip drive the voice codec. In
    order to save external glue logic, the multiplexing between the two is
    implemented in the A2 ASIC chip.

    Thus, the A2 ASIC chip gets the voice output from the baseband and passes it to the

    voice codec.

    An on-chip power management unit generates the internal clock
    frequencies from an external 27.00 MHz clock. A specific application runs
    on the power management unit; it is programmable and can be controlled by
    the host. When

    running at full performance, 100 MHz, the A2 ASIC chip consumes ~700 mW. A

    shutdown pin puts the A2 ASIC chip into a shut down mode which consumes less

    than 1 mA. In the shut down mode, the only thing that is active is the passing

    through of the voice data.
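    These two power figures determine the average drain on the handset
    battery. The sketch below is a rough estimate only: the 10% decoding duty
    cycle is an assumption for illustration, and the shutdown current is charged
    to the 2.5 V core rail (from the features list below), not a datasheet figure:

```python
# Rough average-power estimate for the A2 chip from the figures above.

ACTIVE_W = 0.700                  # full-performance draw at 100 MHz
SHUTDOWN_W = 0.001 * 2.5          # <1 mA, assumed at the 2.5 V core supply
duty = 0.10                       # assumption: video decoding 10% of the time

avg_w = duty * ACTIVE_W + (1 - duty) * SHUTDOWN_W
print(f"average draw ~= {avg_w * 1000:.0f} mW")   # ~72 mW at 10% duty cycle
```

    The near-zero shutdown draw is why the shutdown pin matters: the average
    power is dominated almost entirely by how often the chip is actively decoding.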



    The heart of the device is the ARM920, an industry-standard RISC
    core. All the media and network protocols are implemented in software to
    ensure maximum flexibility. We can see in figure 2 that a 160 KB SRAM is
    used to store all the data that is needed for the video, audio and system
    protocols. The A2 ASIC chip includes ARM system peripherals such as an
    interrupt control unit (ICU), two timers, a real time clock (RTC) and a
    general purpose IO (GPIO). The glueless interface to the LCD controller,
    voice codec, flash memory and host completes the device.

    Features of the A2 Chip

    ASIC Characteristics

    Package: 144 FlexBGA

    0.25 µm CMOS process

    2.5 volts core

    3 volts I/O

    Active power: 700 mW

    Shutdown mode current - less than 1 mA

    Video Decoding

    MPEG-4 Simple profile Level 1

    Bit rate: Up to 144 kbps

    Image size: Up to QCIF (176x144 pixels)

    Frame rate: Up to 30 frames/second

    Voice Decoding


    File Format & Transport Protocol


  • 8/6/2019 Seminar Vcp


    ESF (Emblaze streaming format)

    LCD Controller Port

    YUV 4:2:2 color space

    QCIF resolution

    Voice Codec Interface

    PCM style. Configurable

    Host Interface

    8 bits generic (Intel style) bus




    Consumer electronics has always been driven by various standards.
    These standards have driven the changes that have occurred in product
    ranges in the audio-visual domain. MPEG is a data compression technology
    that reduces the size of the original information many times when encoding
    it. The MPEG formats have been international standards since 1993. Because
    movies contain both images and sound, MPEG can compress both audio and
    video. But video takes more bandwidth and also contains more redundancy
    than audio.

    Video Code:

    The last concern in developing a video cell phone is the video code.
    The video cell phone needs a high-compression, low-bitrate code.
    Fortunately, this is already available. The Moving Picture Experts Group
    (MPEG) is a working group of ISO/IEC. They are in charge of the
    development of international standards for compression, decompression,
    processing, and coded representation of moving pictures, audio and their
    combination. MPEG-4 Systems is a subgroup under MPEG. The MPEG-4
    group develops the tools to support the coded representation of combined,
    streamed, elementary audiovisual information. The information takes many
    different forms: natural or synthetic, audio or visual, 2D and 3D objects,
    within the context of content-based access for digital storage media, digital
    audiovisual communication and other applications.


    MPEG 4



    This is the next generation MPEG code. It goes beyond compression
    methods. Instead of treating data as continuous streams, MPEG-4 actually
    deals with audio-video objects (AVO) that are manipulated and encoded
    independently, allowing for increased interaction with the coded data and
    improved flexibility in editing. It supports the entire gamut of audio-video
    modes and their respective transmission speeds.
    MPEG-4 has been specifically designed to deal with high compression
    ratios for distribution of video and audio without too much loss of quality
    over low-bandwidth networks. It offers a wide range of bit rates, and
    provides fast encoding with high video quality.

    MPEG-4 Systems Version 1 contains several different sets of tools to
    reconstruct a synchronous, interactive and streamed audiovisual scene. There
    are seven tools:
    1. Systems Decoder Model (SDM),
    2. Identification and association of scenes and streams (Object and
    Elementary Stream Descriptors),
    3. Scene description (Binary Format for Scenes),
    4. Synchronization of streams (Sync Layer),
    5. Efficient multiplexing of streams (FlexMux),
    6. Object Content Information (OCI), and
    7. Syntactic Description Language.

    Systems Decoder Model (SDM)



    The systems decoder model includes two models: the timing model
    and the buffer model. The model describes the behavior of an MPEG-4
    terminal. When reconstructing the audiovisual information that comprises the
    session, the SDM also allows a sender to predict how the receiver will
    behave in terms of buffer management and synchronization.

    Identification and Association of Scene and Streams (Object Descriptor)

    The intention of the object descriptor framework is to identify, describe
    and associate elementary streams with the various components of an

    audiovisual scene. An object descriptor is a collection of one or more

    elementary stream descriptors. These descriptors provide configuration and

    other information for the streams that relate to a single object (media object

    or scene description). Elementary stream descriptors include information

    about the source of the stream data. The information comes in the form of a

    unique numeric identifier (the Elementary Stream ID) or a URL pointing to a

    remote source for the stream.

    Object Descriptors group several elementary streams into objects, and

    describe their properties, such as encoding formats, configuration information

    for the decoder, quality of service requirements, content information (author,

    title, rating, etc.) and intellectual property identification. It can also describe

    dependencies between streams for scalability, alternative representations, etc.
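    A much-simplified, hypothetical model of this framework can make the
    grouping concrete: an object descriptor collects elementary stream
    descriptors, each of which points at its stream either by ES_ID or by a
    URL to a remote source. The field names, ES_ID value and URL below are
    illustrative only, not the normative MPEG-4 descriptor syntax.

    ```python
    # Toy model of object/elementary-stream descriptors (names are invented).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ESDescriptor:
        es_id: Optional[int] = None      # unique numeric stream identifier...
        url: Optional[str] = None        # ...or a pointer to a remote source
        stream_type: str = "visual"      # e.g. "visual", "audio", "scene"
        decoder_config: dict = field(default_factory=dict)

    @dataclass
    class ObjectDescriptor:
        od_id: int
        streams: List[ESDescriptor] = field(default_factory=list)

    # One media object backed by a local video stream and a remote audio stream.
    video_object = ObjectDescriptor(
        od_id=1,
        streams=[
            ESDescriptor(es_id=101, stream_type="visual",
                         decoder_config={"codec": "mpeg4-video",
                                         "max_bitrate": 64000}),
            ESDescriptor(url="rtsp://example.com/audio", stream_type="audio"),
        ],
    )
    print(len(video_object.streams))  # 2
    ```
    
    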

    Scene Description (Binary Format for Scenes)

    The scene description addresses the organization of audiovisual objects in a
    scene, in terms of their spatial and temporal positioning. This information
    allows the composition and rendering of the individual audiovisual objects.
    The specification does not, however, mandate particular composition or
    rendering algorithms or architectures, since these are implementation-dependent.
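    The idea of composing objects through spatial positioning can be sketched
    as a toy scene tree, where composition is a tree walk that accumulates
    positional offsets. This illustrates the concept only; it is not the BIFS
    format, and the node names are invented.

    ```python
    # Toy scene graph: composition as a recursive walk accumulating offsets.
    class Node:
        def __init__(self, name, x=0, y=0, children=None):
            self.name, self.x, self.y = name, x, y
            self.children = children or []

        def compose(self, ox=0, oy=0):
            """Yield (object name, absolute position) for every node."""
            ax, ay = ox + self.x, oy + self.y
            yield self.name, (ax, ay)
            for child in self.children:
                yield from child.compose(ax, ay)

    # A scene with a background and a "speaker" object carrying its audio.
    scene = Node("scene", children=[
        Node("background"),
        Node("speaker", x=40, y=30, children=[Node("audio_track")]),
    ])
    for name, pos in scene.compose():
        print(name, pos)  # the audio_track inherits the speaker's position
    ```
    
    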

    Synchronization of Streams (Sync. Layer)

    Elementary streams are the basic abstraction for any data source. They
    are conveyed as SL-packetized (Sync Layer-packetized) streams at the
    stream multiplex interface. This packet representation additionally provides
    timing and synchronization information, as well as fragmentation and random
    access information. The SL extracts this timing information to enable
    synchronized decoding and, subsequently, composition of the elementary
    stream data.
    It adds a header to each access unit of an elementary stream, which

    includes time stamps, reference to a clock elementary stream, and identification

    of key frames (RandomAccessPoint). This is similar to the task of RTP in IP

    networks. However, SL does not contain a payload type (like RTP), and does not

    contain the Elementary Stream ID (ES_ID). In addition, an SL packet does not

    contain an indication of its length, so it must be framed by a lower-level protocol

    such as FlexMux or RTP.
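    The role of the SL header can be sketched as follows. The five-byte layout
    here (a random-access flag byte plus a four-byte composition time stamp in
    milliseconds) is invented for illustration, not the normative SL syntax;
    note that, as in the text above, the packet carries no length field of
    its own.

    ```python
    # Sketch of Sync-Layer-style packetization (byte layout is hypothetical).
    import struct

    def sl_packetize(access_unit: bytes, cts_ms: int,
                     random_access: bool) -> bytes:
        # header: 1 random-access flag byte + 4-byte composition time stamp
        header = struct.pack(">BI", 1 if random_access else 0, cts_ms)
        return header + access_unit

    def sl_parse(packet: bytes):
        random_access, cts_ms = struct.unpack(">BI", packet[:5])
        return bool(random_access), cts_ms, packet[5:]

    pkt = sl_packetize(b"frame-data", cts_ms=40, random_access=True)
    print(sl_parse(pkt))  # (True, 40, b'frame-data')
    ```

    Because the packet has no length indication, a receiver cannot find its
    end on its own; framing must come from a lower layer such as FlexMux or RTP.
    
    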

    Multiplexing of Elementary Streams (FlexMux)

    The TransMux layer is a generic abstraction of the transport protocol stacks of
    existing delivery layers. These stacks may be used to transmit and store content
    complying with the MPEG-4 Standard. Where the transport facility of these
    stacks does not fully address the needs of MPEG-4, a simple multiplexing tool
    (FlexMux) is defined that provides low delay and low overhead.

    FlexMux groups elementary streams according to common attributes, such as
    QoS requirements. It is a very simple multiplexing protocol, but also one
    with very low overhead.
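    In spirit, such a multiplexer can be sketched as a one-byte channel index
    plus a one-byte length in front of each SL packet, so several elementary
    streams share one transport connection with only two bytes of overhead per
    packet. Treat the layout below as an illustration, not the normative
    FlexMux syntax.

    ```python
    # Sketch of simple low-overhead multiplexing (layout is hypothetical).
    def flexmux(packets):
        """packets: iterable of (channel, payload), payload <= 255 bytes."""
        out = bytearray()
        for channel, payload in packets:
            out += bytes([channel, len(payload)]) + payload
        return bytes(out)

    def flexdemux(stream):
        i, result = 0, []
        while i < len(stream):
            channel, length = stream[i], stream[i + 1]
            result.append((channel, stream[i + 2 : i + 2 + length]))
            i += 2 + length
        return result

    # Interleave two elementary streams (channel 0: video, channel 1: audio).
    muxed = flexmux([(0, b"video-au"), (1, b"audio-au"), (0, b"video-au2")])
    print(flexdemux(muxed))
    ```
    
    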


    OCI Data Stream

    An object content information (OCI) stream carries descriptive information about
    audiovisual objects. The stream is organized as a sequence of small,
    synchronized entities called events, which contain information descriptors. The

    main content descriptors are: content classification descriptors, keyword

    descriptors, rating descriptors, language descriptors, textual descriptors, and

    descriptors about the creation of the content. These streams can be linked to other

    media objects with the mechanisms provided by the object descriptor.

    Syntactic Description Language

    A syntactic description language is used to define the syntax of the various

    bitstream components identified by the normative parts of the MPEG-4 Standard.

    This language allows the specification of the mapping of the various parameters

    in a binary format. It also shows how the binary format should be placed in a

    serialized bitstream.
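    The flavor of such a language can be imitated with a tiny table-driven
    serializer: each table entry names a parameter and its width in bits, and
    the table alone determines where each value lands in the serialized
    bitstream. The field names and widths below are invented for illustration,
    not taken from the MPEG-4 Standard.

    ```python
    # Toy table-driven bitstream serializer (syntax table is hypothetical).
    SYNTAX = [("version", 4), ("is_visual", 1), ("object_type", 3), ("bitrate", 16)]

    def serialize(values, syntax=SYNTAX):
        bits = ""
        for name, width in syntax:
            bits += format(values[name], f"0{width}b")   # fixed-width binary
        bits += "0" * (-len(bits) % 8)                   # pad to whole bytes
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    blob = serialize({"version": 1, "is_visual": 1,
                      "object_type": 2, "bitrate": 64000})
    print(blob.hex())  # 1afa00
    ```
    
    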

    MPEG Architecture

    Compression Layer: Includes the elementary (raw) media streams (audio, video).


    TransMux Layer: This is the actual transport protocol, such as RTP/UDP,
    MPEG-2, etc. MPEG-4 does not define its own transport protocol, but
    assumes the application relies on an existing transport protocol.

    The FlexMux Layer is optional, but the Synchronization Layer is always present.

    BIFS - The Binary Format for Scenes (BIFS) describes how MPEG-4 objects
    are placed in "scenes". This part of MPEG-4 Systems is not required for
    small-screen, single-media-object applications.

    The AVO data are conveyed in one or more elementary streams. These
    streams carry parameters such as maximum bit rate, bit error rate and
    quality, as well as stream type information used to determine the required
    decoder resources and the precision of the encoded timing information. The
    data encoded in all of these streams together makes up the entire scene.

    Depending on the nature of the scene, the details in each frame change
    to some extent. But as a particular object moves through its surroundings,
    the background and environment often remain the same. Sometimes the viewing
    angle changes, but this is rarely noticeable because the scene changes too
    fast for the eye to catch it.

    MPEG-2 vs. MPEG-4

    On average, the total bandwidth required by MPEG-2 is a massive
    6500 Kbits/sec at a resolution of 720*576, whereas at the same resolution
    MPEG-4 uses only 880 Kbits/sec. This goes to show that MPEG-4 provides
    relatively high-quality audio-video transmission over various broadcast
    media.
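    Using the figures quoted above, the saving works out to roughly a
    sevenfold reduction in bandwidth:

    ```python
    # Bandwidth comparison at 720*576, using the bit rates quoted above.
    mpeg2_kbps = 6500
    mpeg4_kbps = 880
    ratio = mpeg2_kbps / mpeg4_kbps
    print(f"MPEG-4 needs {ratio:.1f}x less bandwidth than MPEG-2")  # 7.4x
    ```
    
    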

    MPEG-4 definitely offers better compression algorithms than MPEG-1 and
    MPEG-2, but all this comes with a heavy performance demand, as encoding an
    MPEG-4 sequence requires some fairly high-end computing power.


    Conclusion
    The telecommunications world is changing as the trends of media convergence,
    industry consolidation, Internet and IP technologies and mobile communications
    collide into one. This rapid evolution in technology will bring about
    significant change. People will look at their mobile phone as much as they hold
    it to their ear. The mobile phone will be an integral part of most people's
    lives: not an added accessory but a core part of how they conduct their daily
    lives. It will become akin to a remote control or magic wand that lets people
    do what they want, when they want, so much so that instead of saying "You've
    got mail", we might actually say "You've got video!"

    Existing video phones allow users to view rich media content directly on
    their handsets by simply pressing the video function key and browsing through
    the list of available content, in a similar fashion to TV channels.

    However, it is unlikely that in the next couple of years people will watch
    full-length movies or even music videos, as the cost to end users is
    prohibitive: a video cell phone handset costs anywhere between $600 and $700.
    But as bandwidth increases, users will be able to enjoy a larger video size
    with a smoother image and better image quality.


    References

    Tanenbaum, Andrew S., Computer Networks, Prentice-Hall India.

    Schiller, Jochen H., Mobile Communications, Pearson Education Limited, India.

    Digit Magazine.