Database Trends – Past, Present, Future

Presented by Surendar Reddy B

Upload: jason-sutton

Post on 11-Aug-2015


TRANSCRIPT

Page 1: Database Trends – Past, present, Future

Database Trends – Past, Present, Future

Presented by

Surendar Reddy B

Page 2

When did mainframe computers come into being?

The origin of mainframe computers dates back to the 1950s, if not earlier. In those days, mainframe computers were not just the largest computers; they were the only computers and few businesses could afford them.

Mainframe development occurred in a series of generations starting in the 1950s. First generation systems, such as the IBM 705 in 1954 and the IBM 1401 in 1959, were a far cry from the enormously powerful machines that were to follow, but they clearly had characteristics of mainframe computers. These computers were sold as business machines and served then—as now—as the central data repository in a corporation's data processing center.

In the 1960s, the course of computing history changed dramatically when mainframe manufacturers began to standardize the hardware and software they offered to customers. The introduction of the IBM System/360™ (or S/360™) in 1964 signaled the start of the third generation: the first general purpose computers. Earlier systems such as the 1401 were dedicated as either commercial or scientific computers. The revolutionary S/360 could perform both types of computing, as long as the customer, a software company, or a consultant provided the programs to do so. In fact, the name S/360 refers to the architecture’s wide scope: 360 degrees to cover the entire circle of possible uses.

Page 3

When did mainframe computers come into being? (continued)

The S/360 was also the first of these computers to use microcode to implement many of its machine instructions, as opposed to having all of its machine instructions hard-wired into its circuitry. Microcode (or firmware, as it is sometimes called) consists of stored microinstructions, not available to users, that provide a functional layer between hardware and software. The advantage of microcoding is flexibility: any correction or new function can be implemented by changing the existing microcode rather than by replacing the computer.

With standardized mainframe computers to run their workloads, customers could, in turn, write business applications that didn’t need specialized hardware or software. Moreover, customers were free to upgrade to newer and more powerful processors without concern for compatibility problems with their existing applications. The first wave of customer business applications were mostly written in Assembler, COBOL, FORTRAN, or PL/1, and a substantial number of these older programs are still in use today.

In the decades since the 1960s, mainframe computers have steadily grown to achieve enormous processing capabilities. The New Mainframe has an unrivaled ability to serve end users by the tens of thousands, manage petabytes of data, and reconfigure hardware and software resources to accommodate changes in workload—all from a single point of control.

Page 4

Data storage

- Data was stored on punch cards and paper reels.
- Data was then stored on transistors.
- Data was then stored on magnetic tapes and discs.
- Data was then stored on optical discs.

Page 5

Database evolution

- Punch cards
- Flat files – sequential files or text files
- Indexed sequential files
- Cluster – Virtual Storage Access Method (VSAM)
- Hierarchical database
- Network database
- Relational database management system
- Object-relational database management system
- Object-oriented database management system

Page 6

Next-Generation Database Systems

In the late 1960s and early 1970s, there were two mainstream approaches to constructing DBMSs. The first approach was based on the hierarchical data model, typified by IMS (Information Management System) from IBM. The second approach was based on the network data model, which attempted to create a database standard and resolve some of the difficulties of the hierarchical model, such as its inability to represent complex relationships effectively. Together, these approaches represented the first generation of DBMSs.

Page 7

In 1970, Codd produced his paper on the relational data model. This paper was very timely and addressed the disadvantages of the former approaches, in particular their lack of data independence. Many experimental relational DBMSs were implemented thereafter, with the first commercial products appearing in the late 1970s and early 1980s. Now there are over a hundred relational DBMSs for both mainframe and PC environments. Relational DBMSs are referred to as second-generation DBMSs.

Page 8

However, as we discussed, RDBMSs have their failings, particularly their limited modeling capabilities. There has been much research attempting to address this problem. In 1976, Chen presented the Entity-Relationship model, which is now a widely accepted technique for database design. In 1979, Codd himself attempted to address some of the failings of his original work with an extended version of the relational model called RM/T (Codd, 1979), and more recently RM/V2 (Codd, 1990). The attempts to provide a data model that represents the 'real world' more closely have been loosely classified as semantic data modeling.

Page 9

In response to the increasing complexity of database applications, two 'new' data models have emerged: the Object-Oriented Data Model (OODM) and the Object-Relational Data Model (ORDM), previously referred to as the Extended Relational Data Model (ERDM).

Page 10

Page 11

Object Oriented DataBase (OODB)

When you integrate database capabilities with object programming language capabilities, the result is an object-oriented database management system, or ODBMS. An ODBMS makes database objects appear as programming language objects in one or more existing programming languages. Object database management systems extend the object programming language with transparently persistent data, concurrency control, data recovery, associative queries, and other database capabilities.

An object-oriented database (OODB) provides all the facilities associated with the object-oriented paradigm. It enables us to create classes, organize objects, structure an inheritance hierarchy and call methods of other classes. Besides these, it also provides the facilities associated with standard database systems. However, object-oriented database systems have not yet replaced the RDBMS in commercial business applications.

Page 12

Following are the two different approaches to designing an object-oriented database:

1. Designed to store, retrieve and manage objects created by programs written in an object-oriented language (OOL) such as C++ or Java.

Although a relational database can be used to store and manage objects, it does not understand objects as such. Therefore, a middle layer, called an object manager or object-oriented layer, is required to translate objects into tuples of a relation.

2. Designed to provide object-oriented facilities to users of non-object-oriented programming languages such as C or Pascal.

The user creates classes, objects, inheritance and so on, and the database system stores and manages these objects and classes. This second approach thus turns non-OOPLs into OOPLs. A translation layer is required to map the objects created by the user into objects of the database system.
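As a sketch of the first approach, the "object manager" layer described above can be imagined as follows. This is an illustrative Python/sqlite3 sketch only: the Staff class, the table layout and the ObjectManager API are invented for this example, not taken from any real product.

```python
# Minimal "object manager" middle layer: translates program objects
# into tuples of a relation and back (illustrative sketch).
import sqlite3

class Staff:
    def __init__(self, staff_no, fname, lname):
        self.staff_no, self.fname, self.lname = staff_no, fname, lname

class ObjectManager:
    """Middle layer: maps Staff objects to rows of the staff relation."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS staff "
                     "(staff_no INTEGER PRIMARY KEY, fname TEXT, lname TEXT)")

    def save(self, obj):
        # Object -> tuple: the relational engine never sees the object.
        self.conn.execute("INSERT INTO staff VALUES (?, ?, ?)",
                          (obj.staff_no, obj.fname, obj.lname))

    def load(self, staff_no):
        # Tuple -> object: reconstruct the object from a row.
        row = self.conn.execute(
            "SELECT staff_no, fname, lname FROM staff WHERE staff_no = ?",
            (staff_no,)).fetchone()
        return Staff(*row) if row else None

conn = sqlite3.connect(":memory:")
mgr = ObjectManager(conn)
mgr.save(Staff(100, "Ajay", "Arora"))
restored = mgr.load(100)
```

The relational database itself only ever stores flat tuples; the object view exists purely in the translation layer.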

Page 13

Examples

Object Database (ODBMS) for Java, written entirely in Java, and compliant with the Java Data Objects (JDO) standard developed by Sun.

db4o is designed to be a simple, easy-to-use, and fast native object database. Software developers using popular Java and .NET object-oriented frameworks know that using an object-oriented database is a more natural way to get work done. Developers have three ways of storing and retrieving data:

Page 14

Advantages of OODBMSs

Enriched modeling capabilities

The object-oriented data model allows the 'real world' to be modeled more closely.

Extensibility

OODBMSs allow new data types to be built from existing types. The ability to factor out common properties of several classes and form them into a superclass that can be shared with subclasses can greatly reduce redundancy within a system.

Removal of impedance mismatch

A single language interface between the Data Manipulation Language (DML) and the programming language overcomes the impedance mismatch. This eliminates many of the inefficiencies that occur in mapping a declarative language such as SQL to an imperative language such as C. Most OODBMSs provide a DML that is computationally complete, compared with SQL, the standard language of RDBMSs.
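The mismatch can be illustrated with a toy comparison in Python with sqlite3 (the staff table and the salary threshold are invented for illustration): the SQL path hands back bare tuples of primitives that the program must re-map, while a single-language query works on objects directly.

```python
# Contrast of declarative SQL access vs a query written directly in
# the host language (illustrative sketch, not any real OODBMS API).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (staff_no INTEGER, salary REAL)")
conn.executemany("INSERT INTO staff VALUES (?, ?)",
                 [(1, 900.0), (2, 1500.0), (3, 2100.0)])

# Declarative SQL: the answer comes back as bare tuples of primitives;
# the program must convert rows into its own objects (the mismatch).
rows = conn.execute(
    "SELECT staff_no FROM staff WHERE salary > 1000").fetchall()
well_paid_sql = [r[0] for r in rows]

# Single-language DML: the same query expressed over objects in the
# host language -- no mapping between two type systems is needed.
class Staff:
    def __init__(self, staff_no, salary):
        self.staff_no, self.salary = staff_no, salary

staff = [Staff(1, 900.0), Staff(2, 1500.0), Staff(3, 2100.0)]
well_paid_obj = [s.staff_no for s in staff if s.salary > 1000]
```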

Page 15

More expressive query language

Navigational access from the object is the most common form of data access in an OODBMS. This is in contrast to the associative access of SQL.
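A minimal sketch of the difference, using invented Employee and Department classes: navigational access follows an object reference directly, while associative access finds the answer by matching values, as an SQL join would.

```python
# Navigational vs associative access (illustrative sketch).
class Department:
    def __init__(self, name):
        self.name = name

class Employee:
    def __init__(self, name, dept):
        self.name = name
        self.dept = dept  # direct reference to another object

sales = Department("Sales")
emp = Employee("Ravi", sales)

# Navigational: one pointer traversal, no value matching.
dept_name_nav = emp.dept.name

# Associative: find the department by matching a stored key value,
# the way SQL would join employee.dept_id to department.id.
departments = {1: "Sales"}
employees = [{"name": "Ravi", "dept_id": 1}]
dept_name_assoc = departments[employees[0]["dept_id"]]
```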

Support for long-duration transactions

Current relational DBMSs enforce serializability on concurrent transactions to maintain database consistency. OODBMSs use different protocols to handle the types of long-duration transaction that are common in many advanced database applications.

Applicability to advanced database applications

There are many areas where traditional DBMSs have not been particularly successful, such as Computer-Aided Design (CAD), CASE, Office Information Systems (OIS), and multimedia systems. The enriched modeling capabilities of OODBMSs have made them suitable for these applications.

Page 16

Improved performance

A number of benchmarks have suggested that OODBMSs provide significant performance improvements over relational DBMSs; the results showed an average 30-fold performance improvement for the OODBMS over the RDBMS.

Page 17

Disadvantages of OODBMSs

Lack of universal data model

There is no universally agreed data model for an OODBMS, and most models lack a theoretical foundation. This is seen as a significant drawback, comparable to the situation of pre-relational systems.

Lack of experience

In comparison to RDBMSs, the use of OODBMSs is still relatively limited. This means that we do not yet have the level of experience that we have with traditional systems.

Lack of standards

There is a general lack of standards for OODBMSs. We have already mentioned that there is no universally agreed data model; similarly, there is no standard object-oriented query language.

Competition

Perhaps one of the most significant issues facing OODBMS vendors is the competition posed by the RDBMS and the emerging ORDBMS products. These products have an established user base with significant experience available.

Page 18

Locking at object level may impact performance

Many OODBMSs use locking as the basis of their concurrency control protocol, and locking at the level of individual objects can carry significant overhead.

Complexity

The increased functionality provided by an OODBMS makes the system more complex than a traditional DBMS.

Lack of support for views

Currently, most OODBMSs do not provide a view mechanism.

Lack of support for security

Currently, OODBMSs do not provide adequate security mechanisms. The user cannot grant access rights on individual objects or classes.

Page 19

Object-Relational Database Systems

An object-relational database is also called an object-relational database management system (ORDBMS). This system simply puts an object-oriented front end on a relational database (RDBMS). When applications interface to this type of database, they normally interact as though the data were stored as objects. However, the system converts the object information into data tables with rows and columns and handles the data as a relational database would. Likewise, when the data is retrieved, it must be reassembled from simple data into complex objects.
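What this disassembly and reassembly looks like can be sketched in Python with sqlite3. The Customer and Address classes and the table layout are invented for illustration; a real ORDBMS performs the equivalent translation inside the engine.

```python
# Sketch of an object-relational front end: a complex (nested) object
# is flattened into plain relational columns on the way in and
# reassembled into an object on the way out.
import sqlite3

class Address:
    def __init__(self, city, zip_code):
        self.city, self.zip_code = city, zip_code

class Customer:
    def __init__(self, cust_id, name, address):
        self.cust_id, self.name, self.address = cust_id, name, address

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer "
             "(cust_id INTEGER, name TEXT, city TEXT, zip TEXT)")

def store(c):
    # Disassemble: the nested Address becomes ordinary columns.
    conn.execute("INSERT INTO customer VALUES (?, ?, ?, ?)",
                 (c.cust_id, c.name, c.address.city, c.address.zip_code))

def fetch(cust_id):
    # Reassemble: simple column values become a complex object again.
    cid, name, city, zipc = conn.execute(
        "SELECT * FROM customer WHERE cust_id = ?", (cust_id,)).fetchone()
    return Customer(cid, name, Address(city, zipc))

store(Customer(7, "Acme Ltd", Address("Pune", "411001")))
back = fetch(7)
```

The conversion work in `store` and `fetch` is exactly the overhead discussed later under performance constraints.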

Page 20

Relational DBMSs are currently the dominant database technology. The OODBMS has also become the favored system for financial and telecommunications applications. Although the OODBMS market is still small, the OODBMS continues to find new application areas, such as the World Wide Web. Some industry analysts expect the market for OODBMSs to grow at over 50% per year, a rate faster than the total database market. However, their sales are unlikely to overtake those of relational systems, both because of the wealth of businesses that find RDBMSs acceptable and because businesses have invested so much money and so many resources in RDBMSs that change is prohibitive.

Page 21

The choice of DBMS seemed to be between the relational DBMS and the object-oriented DBMS. However, many vendors of RDBMS products are conscious of the threat and promise of the OODBMS. They agree that traditional relational DBMSs are not suited to advanced applications. The most obvious way to remedy the shortcomings of the relational model is to extend the model with these types of features. This is the approach taken by many extended relational DBMSs, although each has implemented a different combination of features. Thus, there is no single extended relational model; rather, there are a variety of such models, whose characteristics depend upon the way and the degree to which extensions were made. However, all the models share the same basic relational tables and query language, all incorporate some concept of 'object', and some have the ability to store methods (or procedures or triggers) as well as data in the database.

Page 22

Page 23

Advantages of ORDBMSs

Reuse and Sharing

The main advantages of extending the relational data model come from reuse and sharing. Reuse comes from the ability to extend the DBMS server to perform standard functionality centrally, rather than have it coded in each application.

Increased productivity

An ORDBMS provides increased productivity, both for the developer and for the end user.

Use of experience in developing RDBMSs

Another obvious advantage is that the extended relational approach preserves the significant body of knowledge and experience that has gone into developing relational applications. This is a significant advantage, as many organizations would find it prohibitively expensive to change.

Page 24

Disadvantages of ORDBMSs

The ORDBMS approach has the obvious disadvantages of complexity and associated increased costs. Further, proponents of the relational approach believe that the essential simplicity and purity of the relational model are lost with these types of extension.

Performance constraints

Because the ORDBMS converts data between an object-oriented format and the RDBMS format, the performance of the database is degraded substantially. This is due to the additional conversion work the database must do.

Page 25

Strengths of RDBMS, ODBMS and ORDBMS

The strengths of the various kinds of database systems can be summarized as follows:

- Relational systems: simple data types, powerful query languages, high protection.
- Persistent-programming-language-based OODBs: complex data types, integration with a programming language, high performance.
- Object-relational systems: complex data types, powerful query languages, high protection.

These descriptions hold in general, but keep in mind that some database systems blur these boundaries. For example, some object-oriented database systems built around a persistent programming language are implemented on top of a relational database system. Such systems may provide lower performance than object-oriented database systems built directly on a storage system, but provide some of the stronger protection guarantees of relational systems.

Page 26

Page 27

Page 28

Page 29

Advanced Database Applications

These applications include:

- Computer-Aided Design (CAD)
- Computer-Aided Manufacturing (CAM)
- Computer-Aided Software Engineering (CASE)
- Network Management Systems
- Office Information Systems (OIS) and Multimedia Systems
- Digital Publishing
- Geographic Information Systems (GIS)
- Interactive and dynamic Web sites

These new applications cannot be easily built on an RDBMS because of the following weaknesses.

Page 30

Weaknesses of RDBMSs

The following weaknesses of RDBMSs led to the development of OODBMSs:

- Poor representation of 'real world' entities
- Poor support for integrity and enterprise constraints
- Homogeneous data structure
- Limited operations
- Difficulty in handling recursive queries
- Impedance mismatch

Other problems with RDBMSs are associated with concurrency, schema changes, and poor navigational access.

Page 31

Storing Objects in a Relational Database

For the purposes of discussion, consider an inheritance hierarchy that has a Staff superclass and three subclasses: Manager, SalesPersonnel, and Secretary.

Page 32

Map each class or subclass to a relation

Staff (staffNo, fName, lName, position, sex, DOB, salary)
Manager (staffNo, bonus, mgrStartDate)
SalesPersonnel (staffNo, salesArea, carAllowance)
Secretary (staffNo, typingSpeed)

Map each subclass to a relation

Manager (staffNo, fName, lName, position, sex, DOB, salary, bonus, mgrStartDate)
SalesPersonnel (staffNo, fName, lName, position, sex, DOB, salary, salesArea, carAllowance)
Secretary (staffNo, fName, lName, position, sex, DOB, salary, typingSpeed)

Page 33

Map the hierarchy to a single relation

Staff (staffNo, fName, lName, position, sex, DOB, salary, bonus, mgrStartDate, salesArea, carAllowance, typingSpeed, typeFlag)
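The single-relation option can be sketched with sqlite3. Only a subset of the columns is shown for brevity, and the 'M'/'S' flag values are assumptions made for this example: typeFlag discriminates the subclass, and columns that do not apply to a row's subclass are left NULL.

```python
# Single-relation mapping of the Staff hierarchy (illustrative subset).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Staff (
    staffNo     INTEGER PRIMARY KEY,
    fName       TEXT,
    salary      REAL,
    bonus       REAL,     -- Manager only
    typingSpeed INTEGER,  -- Secretary only
    typeFlag    TEXT      -- 'M' = Manager, 'S' = Secretary (assumed codes)
)""")
conn.execute("INSERT INTO Staff VALUES (1, 'Ajay', 15000, 2000, NULL, 'M')")
conn.execute("INSERT INTO Staff VALUES (2, 'Rita', 9000, NULL, 80, 'S')")

# Retrieving all managers needs only a filter on the discriminator,
# but non-manager rows carry NULLs in the manager-specific columns.
managers = conn.execute(
    "SELECT staffNo, bonus FROM Staff WHERE typeFlag = 'M'").fetchall()
```

The trade-off is visible here: queries stay simple (no joins), at the cost of sparse rows full of NULLs.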

Page 34

ORDBMS Approach

The best solution to all of the above problems is the object-relational database management system. For example, in Oracle 8 we can create an object type staff in the database and then create three different tables, namely Manager, SalesPersonnel and Secretary, each of which refers to the staff object type as one of its attributes. The process of designing this database with Oracle 8 syntax is shown below:

Creation of the object type staff:

CREATE TYPE staff AS OBJECT (
  staffno  NUMBER(4),
  fname    VARCHAR2(20),
  lname    VARCHAR2(20),
  position VARCHAR2(20),
  sex      VARCHAR2(1),
  dob      DATE,
  salary   NUMBER(8,2));

(Oracle does not allow a PRIMARY KEY constraint inside an object type; the key is declared on the tables that use the type. The position column is widened to VARCHAR2(20) so that sample values such as 'production manager' fit.)

 

Page 35

The object type staff can be referenced in the table Manager as one of its attributes, as shown below:

CREATE TABLE manager(

staff_detail staff,

bonus number(5),

mgrstartdate date);

A row can then be inserted into the Manager table, using the staff constructor to supply the object attribute:

INSERT INTO manager VALUES
(staff(100, 'ajay', 'arora', 'production manager', 'm', '25-feb-1976', 15000),
 2000, '12-dec-2002');


Page 37

Similarly, we create the other tables as shown below:

 

CREATE TABLE salepersonnel(

staff_detail staff,

salearea varchar(25),

carallowance number(7));

 

CREATE TABLE secretary(

staff_detail staff,

typingspeed number(3));

 

Page 38

Future of databases – infrastructure

An IBM survey conducted in 2010 identified technology trends expected by 2015. The survey garnered responses from more than 2,000 IT professionals worldwide with expertise in areas such as software testing, system and network administration, software architecture, and enterprise and web application development. There were two main findings from the survey:

Cloud computing will overtake on-premise computing as the primary way organizations acquire IT resources.

Mobile application development for devices such as iPhone and Android, and even tablet PCs like iPad and PlayBook, will surpass application development on other platforms.

Page 39

Cloud computing

Cloud computing is not a new technology but a new model for delivering IT resources. It gives the illusion that people can have access to an infinite amount of computing resources available on demand. With cloud computing, you can rent computing power with no commitment. There is no need to buy a server; just pay for what you use. This new model is often compared with how people use and pay for utilities. For example, you only pay for how much water or electricity you consume in a given amount of time.

Cloud computing has drastically altered the way computing resources are obtained, allowing almost everybody, from one-person companies and large enterprises to governments, to work on projects that could not have been possible before.

Page 40

Comparing the traditional IT model to the cloud computing model

  Traditional IT model              Cloud computing model
  Capital budget required           Part of operating expense
  Large upfront investment          Start at 2 cents/hour
  Plan for peak capacity            Scale on demand
  120 days for a project to start   Less than 2 hours to a working system

While in the traditional IT model you need to request budget to acquire hardware, and invest a large amount of money upfront; with the Cloud computing model, your expenses are considered operating expenses to run your business; you pay on demand a small amount per hour for the same resources.

In the traditional IT model, requesting budget, procuring the hardware and software, installing it on a lab or data center, and configuring the software can take a long time. On average we could say a project could take 120 days or more to get started. With Cloud computing, you can have a working and configured system in less than 2 hours!

In the traditional IT model, companies need to plan for peak capacity. For example, if your company's future workload requires 3 servers for 25 days of the month, but needs 2 more servers to handle the workload of the last 5 days of the month, then the company needs to purchase 5 servers, not 3. In the cloud computing model, the same company could invest in just the 3 servers and rent 2 more servers for the last 5 days of the month.
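The arithmetic behind this example can be sketched as follows. The purchase price and hourly rate are made-up numbers (the "2 cents/hour" figure echoes the table above); only the comparison matters.

```python
# Toy cost comparison for the peak-capacity example (made-up prices).
SERVER_COST = 2000.0   # hypothetical purchase price per server
RENT_PER_HOUR = 0.02   # hypothetical cloud rate ("2 cents/hour")

# Traditional model: buy for the peak (5 servers, owned outright).
traditional = 5 * SERVER_COST

# Cloud-assisted model: buy the 3 always-needed servers, rent 2 more
# for the 5 peak days (5 days * 24 hours each).
cloud = 3 * SERVER_COST + 2 * 5 * 24 * RENT_PER_HOUR
```

With these illustrative numbers the rented peak capacity costs a few dollars per month instead of two extra servers' purchase price.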

Page 41

Characteristics of the Cloud

Cloud Computing is based on three simple characteristics:

Standardization – standardization provides the ability to build a large set of homogeneous IT resources, mostly from inexpensive components. Standardization is the opposite of customization.

Virtualization – virtualization provides a method for partitioning the large pool of IT resources and allocating resources on demand. After use, the resources can be returned to the pool for others to reuse.

Automation – automation allows users on the cloud to control the resources they provision without having to wait for an administrator to handle their requests. This is important in a large cloud environment.

Page 42

Cloud computing service models

There are three Cloud computing service models:

Infrastructure as a Service (IaaS) - Infrastructure as a Service providers take care of your infrastructure (Data center, hardware, operating system) so you don't need to worry about these.

Platform as a Service (PaaS) - Platform as a Service providers take care of your application platform or middleware. For example, in the case of IBM middleware products, a PaaS would take care of providing a DB2 server, a WebSphere application server, and so on.

Software as a Service (SaaS) - Software as a Service providers take care of the application that you need to run. You can think of them as "application stores" where you go and rent applications you need by the hour. A typical example of a SaaS is Salesforce.com.

Page 43

Top 10 cloud computing providers of 2012

1. Amazon (Amazon Web Services, AWS)
2. Rackspace
3. CenturyLink/Savvis
4. Salesforce.com
5. Verizon/Terremark
6. Joyent
7. Citrix
8. Bluelock
9. Microsoft
10. VMware

Page 44

Amazon Web Services

Amazon Web Services, or AWS, is the leading provider of public cloud infrastructure. AWS has data centers in four regions: US-East, US-West, Europe and Asia Pacific. Each region has multiple availability zones for improved business continuity.

With AWS you can select virtual servers, called Elastic Compute Cloud (EC2) instances. These instances are Intel-based (32- and 64-bit) and can run Windows or Linux (numerous distributions) operating systems. Pick from the different pre-defined instance types based on your need for CPU cores, memory and local storage.

Page 45

Amazon Web Services (continued)

Figure: summary of the different AWS EC2 instance types. Each type has a different price per hour, as documented at aws.amazon.com.

Page 46

AWS EC2 instance types

For storage, AWS has three choices:

Instance storage – instance storage is included with your instance at no extra cost; however, the data is not persistent, which means it disappears if the instance crashes or if you terminate it.

Simple Storage Service (S3) – S3 behaves like file-based storage organized into buckets. You interact with it using HTTP PUT and GET requests.

Elastic Block Store (EBS) – EBS volumes can be treated as regular disks on a computer. EBS allows for persistent storage and is ideal for databases.

Page 47

Handling security on the Cloud

Security ranks high when discussing the reasons why a company may not want to work on the public Cloud. The idea of having confidential data held by a third party Cloud provider is often seen as a security and privacy risk.

While these concerns may be valid, Cloud computing has evolved and keeps evolving rapidly. Private clouds provide a way to reassure customers that their data is held safely on-premises. Hardware and software such as IBM Cloudburst™ and IBM WebSphere Cloudburst Appliance work hand-in-hand to let companies develop their own cloud.

Companies such as Amazon and IBM offer virtual private cloud (VPC) services where servers are still located in the cloud provider's data centers, yet they are not accessible from the public internet; security can be completely managed by the company's own security infrastructure and processes.

Companies can also work with hybrid clouds where they can keep critical data in their private cloud, while data used for development or testing can be stored on the public cloud.

Page 48

Databases and the Cloud

Cloud Computing is a new delivery method for IT resources including databases. IBM DB2 data server is Cloud-ready in terms of licensing and features.

Different DB2 editions for production, development and test are available on AWS and the IBM developer cloud. DB2 images are also available for private clouds using VMware or WebSphere Cloudburst appliance.

Page 49

DB2 images available on private, hybrid and public clouds

Page 50

Licensing advantages on the cloud

Bring your own license (BYOL) allows you to use your existing DB2 licenses on the cloud.

Pay as you go (PAYG) allows you to pay for what you use.

You can always use DB2 Express-C on the Cloud at no charge, though you may have to pay the Cloud provider for using its infrastructure.

Page 51

DB2 feature advantages on the cloud

- Database Partitioning Feature (DPF)
- High Availability Disaster Recovery (HADR)
- Compression

Page 52

Trends in programming languages

Page 53

First-generation programming language

A first-generation programming language is a machine-level programming language.

Originally, no translator was used to compile or assemble the first-generation language; first-generation programming instructions were entered through the front panel switches of the computer system.

Page 54

Second-generation programming Second-generation programming languagelanguage

Second-generation programming language is a generational way to Second-generation programming language is a generational way to categorise assembly languages. The term was coined to provide a categorise assembly languages. The term was coined to provide a distinction from higher level third-generation programming distinction from higher level third-generation programming languages (3GL) such as COBOL and earlier machine code languages. languages (3GL) such as COBOL and earlier machine code languages. Second-generation programming languages have the following Second-generation programming languages have the following properties:properties:

The code can be read and written by a programmer. To run on a The code can be read and written by a programmer. To run on a computer it must be converted into a machine readable form, a computer it must be converted into a machine readable form, a process called assembly.process called assembly.

The language is specific to a particular processor family and environment.

Second-generation languages are sometimes used in kernels and device drivers (though C is generally employed for this in modern kernels), but more often find use in extremely performance-intensive processing such as games, video editing, and graphics manipulation/rendering.

One method for creating such code is to let a compiler generate a machine-optimised assembly-language version of a particular function. This code is then hand-tuned, gaining both the brute-force insight of the machine's optimizing algorithm and the intuitive abilities of the human optimiser.
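The hand-tuning workflow above depends on being able to inspect the lower-level code a translator emits. As a rough analogue in Python (standard library only; the function `dot` is made up for illustration), the `dis` module reveals the bytecode the interpreter generates for a high-level function, which is the kind of output a programmer would study before tuning:

```python
import dis

def dot(a, b):
    """High-level, human-readable code."""
    total = 0
    for x, y in zip(a, b):
        total += x * y
    return total

# Inspect the lower-level instructions generated from the source,
# analogous to reading compiler-emitted assembly before hand-tuning it.
dis.dis(dot)
```

This only imitates the 2GL workflow: Python bytecode is not processor-specific assembly, but the read-the-generated-code step is the same idea.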

Page 55: Database Trends – Past, present, Future

55

Third-generation programming language

A third-generation programming language (3GL) is a refinement of a second-generation programming language. The second generation of programming languages brought logical structure to software. The third generation brought refinements to make the languages more programmer-friendly. This includes features like improved support for aggregate data types, and expressing concepts in a way that favours the programmer, not the computer (e.g. no longer needing to state the length of multi-character (string) literals in Fortran). A third-generation language improves over a second-generation language by having the computer, not the programmer, take care of non-essential details. "High-level language" is a synonym for third-generation programming language.

First introduced in the late 1950s, Fortran, ALGOL, and COBOL are early examples of this sort of language.

Most popular general-purpose languages today, such as C, C++, C#, Java, BASIC, and Pascal, are also third-generation languages.

Most 3GLs support structured programming.

A programming language such as C, FORTRAN, or Pascal enables a programmer to write programs that are more or less independent of a particular type of computer. Such languages are considered high-level because they are closer to human languages and further from machine languages. In contrast, assembly languages are considered low-level because they are very close to machine languages.

The main advantage of high-level languages over low-level languages is that they are easier to read, write, and maintain. Ultimately, programs written in a high-level language must be translated into machine language by a compiler or interpreter.

The first high-level programming languages were designed in the 1950s. Examples of high-level languages are ALGOL, COBOL, FORTRAN, and Ada. Programs written in them could run on different machines, so they were machine-independent.

Page 56: Database Trends – Past, present, Future

56

Fourth-generation programming language

The term fourth-generation programming language (1970s–1990; abbreviated 4GL) is better understood as a fourth-generation environment: a package of systems-development software including a very high-level programming language and a development environment or 'Analyst Workbench' designed around a central data dictionary system, a library of loosely coupled design patterns, a CRUD generator, a report generator, an end-user query language, a DBMS, a visual design tool, and an integration API. Historically, 4GLs were often used for prototyping and evolutionary development of commercial business software. In the history of computer science, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power. The 4GL was followed by efforts to define and use a 5GL.

The natural-language, block-structured mode of the third-generation programming languages improved the process of software development. However, 3GL development methods can be slow and error-prone. It became clear that some applications could be developed more rapidly by adding a higher-level programming language and methodology that would generate the equivalent of very complicated 3GL instructions with fewer errors. In some senses, software engineering arose to handle 3GL development; 4GL and 5GL projects are more oriented toward problem solving and systems engineering.

All 4GLs are designed to reduce programming effort, the time it takes to develop software, and the cost of software development. They are not always successful in this task, sometimes resulting in inelegant and unmaintainable code. However, given the right problem, the use of an appropriate 4GL can be spectacularly successful, as was seen with MARK-IV and MAPPER. The usability improvements obtained by some 4GLs (and their environments) allowed better exploration of heuristic solutions than did the 3GL.
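SQL, the end-user query language bundled with most DBMSs, is one of the most widely cited examples of 4GL-style programming: the query states what result is wanted and leaves the how to the engine. A minimal sketch using Python's standard sqlite3 module (the `orders` table and its rows are invented for illustration):

```python
import sqlite3

# Set up a throwaway in-memory database with sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EU", 120.0), (2, "US", 80.0), (3, "EU", 50.0)])

# One declarative statement replaces the 3GL loop-and-accumulate pattern:
# the DBMS chooses how to scan, group, and sort.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 170.0), ('US', 80.0)]
```

In a 3GL the same report would require an explicit loop, a dictionary of running totals, and a sort; here those non-essential details are handled by the system, which is exactly the 4GL trade-off described above.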

Page 57: Database Trends – Past, present, Future

57

Fifth-generation programming language

A fifth-generation programming language (abbreviated 5GL) is a programming language based on solving problems using constraints given to the program, rather than using an algorithm written by a programmer. Most constraint-based and logic programming languages and some declarative languages are fifth-generation languages.

While fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer. This way, the programmer only needs to specify what problems need to be solved and what conditions need to be met, without worrying about how to implement a routine or algorithm to solve them. Fifth-generation languages are used mainly in artificial-intelligence research. Prolog, OPS5, and Mercury are examples of fifth-generation languages.
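The constraints-not-algorithms style can be imitated in miniature. A real 5GL or logic system such as Prolog infers the search strategy itself; the Python sketch below (the constraint set is invented for illustration) only mimics the programming style by pairing a declarative list of conditions with a generic brute-force search:

```python
from itertools import product

# 5GL-style problem statement: list the constraints, not the algorithm.
constraints = [
    lambda x, y: x + y == 10,   # the two numbers sum to 10
    lambda x, y: x * y == 21,   # their product is 21
    lambda x, y: x < y,         # report the ordered pair only
]

# Generic solver: enumerate candidates and keep those meeting every constraint.
solutions = [(x, y)
             for x, y in product(range(11), repeat=2)
             if all(c(x, y) for c in constraints)]
print(solutions)  # [(3, 7)]
```

The programmer's code states only the conditions to be met; the search loop is problem-independent. The gap between this toy and practice is the point made later in the section: deriving an *efficient* search from the constraints is itself a hard problem.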

These types of languages were also built upon Lisp, many originating on the Lisp machine, such as ICAD. There are also many frame languages, such as KL-ONE.

In the 1980s, fifth-generation languages were considered to be the wave of the future, and some predicted that they would replace all other languages for system development, with the exception of low-level languages. Most notably, from 1982 to 1993 Japan put much research and money into its Fifth Generation Computer Systems project, hoping to design a massive computer network of machines using these tools.

However, as larger programs were built, the flaws of the approach became more apparent. It turns out that, starting from a set of constraints defining a particular problem, deriving an efficient algorithm to solve it is a very difficult problem in itself. This crucial step cannot yet be automated and still requires the insight of a human programmer.

Today, fifth-generation languages are back as a possible level of computer language. A number of software vendors currently claim that their software meets the visual "programming" requirements of the 5GL concept.

Page 58: Database Trends – Past, present, Future

58

Any questions?

Please call or email me

9701572220 – M

[email protected]

Page 59: Database Trends – Past, present, Future

59

Thanks