PROJECT REPORT
PROJECT DETAILS: OBJECT LINKING & EMBEDDING FOR PROCESS CONTROL (OPC)
COMPANY: L&T KNOWLEDGE CITY L&T SARGENT & LUNDY LTD, WAGHODIYA CROSSING, VADODARA, GUJARAT.
PREPARED BY : ABHIMANYU AMBEGAONKAR - (07-ICG-60)
SUMANT SHARMA -(07-ICG-61)
ACKNOWLEDGEMENT
We would like to thank our internal project guide Mr. Dipesh Shah, Lecturer, Instrumentation & Control Dept., S.V.I.T.-Vasad, for inspiring us to undertake a project that helped us gain more knowledge in our own engineering field.
This dissertation would not have been successful without Mr. Chandresh Sharma, Asst. General Manager (Engineering, Control & Instrumentation, Power Projects), L&T Sargent & Lundy Ltd., whose constant guidance at the company helped us tremendously with our project.
While concluding, we would like to thank the team at L&T Sargent & Lundy Ltd., who made our training period a memorable experience and a great success.
Abhimanyu Ambegaonkar - (07 ICG 60)
Sumant Sharma - (07 ICG 61)
B.E. I&C,
S.V.I.T.- VASAD.
COMPANY PROFILE
L&T-Sargent & Lundy (L&T-S&L), located in Vadodara, is a private-sector company offering products and services in the power and energy sector. L&T-S&L has an employee strength of 250-500.
ADDRESS : L&T Knowledge City, Ajwa Waghodiya Crossing, NH 8, Vadodara 390002, Gujarat, India.
TELEPHONE : (0265) 2455000
WEB SITE : www.lntsnl.com
TABLE OF CONTENTS
1. Introduction
   1.1 Introduction
2. OPC Foundation
   2.1 History
   2.2 General Information of OPC
   2.3 OPC Server and Clients
3. OPC Definition
   3.1 Block Diagram & Explanation
4. Architecture
   4.1 OPC Architecture
   4.2 OPC Data Model
   4.3 OPC for Industrial Automation
5. Network Management
   5.1 Network Management in the Pre-OPC Era
   5.2 Network Management in the OPC Era
   5.3 Network Communication with OPC
   5.4 Difference Between OPC & Pre-OPC Operability
6. OPC Technology
   6.1 Object Linking & Embedding
   6.2 Component Object Modelling
   6.3 Distributed Component Object Modelling
   6.4 Distributed Component Object Modelling Architecture
7. OPC Interface
   7.1 Server/Client Interface
8. OPC Server Configuration
   8.1 OPC Data Access
   8.2 OPC Alarms & Events
   8.3 OPC Historical Data Access
9. OPC Data Hub
   9.1 Advanced OPC Tunneling
   9.2 Load Balancing Between Computers
   9.3 Example of Advanced Tunneling
10. OPC Application Capacity
11. OPC System Requirements
12. Conclusion
13. Glossary
Chapter 1: INTRODUCTION
OPC stands for Object Linking & Embedding for Process Control.
OPC is a standard, manufacturer-independent programming interface through which an automation application client such as a human interface can access the plant data coming from remote devices, such as programmable logic controllers, field bus devices or real-time databases.
To that effect, the manufacturer of automation devices supplies an OPC server that runs on a PC, which communicates with its devices through a proprietary protocol. An OPC Server can manage several devices of the same type. Several servers can run in parallel and each server can be accessed by several clients, which run on the same PC or in the same network. All OPC servers present the process variables in the same format to their clients as a uniform interface.
This interface consists of a set of commands collected in a software library (DLL) that can be called by client applications written in Visual Basic, C# or other Microsoft programming languages (even Excel) and which access the OPC servers.
The OPC library in particular allows reading and writing process variables, reading alarms and events and acknowledging alarms, and retrieving historical data from databases according to several criteria.
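As a hedged illustration of this read/write workflow, the sketch below models a client session in Python. The class and tag names are hypothetical, standing in for a vendor's OPC library rather than reproducing any real one:

```python
# Minimal sketch of an OPC DA-style client session. SimulatedOpcServer
# stands in for a vendor server; all names here are illustrative only.

class SimulatedOpcServer:
    """Holds process variables (items) the way an OPC server exposes them."""
    def __init__(self):
        self._tags = {"Boiler.PV": 74.2, "Boiler.SP": 80.0}

    def read(self, item_id):
        return self._tags[item_id]

    def write(self, item_id, value):
        if item_id not in self._tags:
            raise KeyError(f"unknown item: {item_id}")
        self._tags[item_id] = value


class OpcClient:
    """Client-side wrapper: the uniform read/write interface every OPC
    server presents, regardless of the device protocol underneath."""
    def __init__(self, server):
        self._server = server

    def read(self, item_id):
        return self._server.read(item_id)

    def write(self, item_id, value):
        self._server.write(item_id, value)


client = OpcClient(SimulatedOpcServer())
client.write("Boiler.SP", 85.0)   # operator raises the set point
print(client.read("Boiler.SP"))   # -> 85.0
```

In a real deployment the client would be an HMI or Excel add-in calling the vendor's DLL; the shape of the calls, read and write against named items, is the part the standard fixes.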
Automation platforms such as ABB's 800XA platform act as OPC clients to collect data from PLCs or databases through third-party OPC servers. Several automation platforms act themselves as an OPC server to publish their data, events and historical data.
OPC is the preferred connectivity for 75% of HMI/SCADA, 68% of DCS/PLC and 53% of ERP/enterprise system level applications.
Chapter 2
The OPC Foundation
The OPC Foundation is dedicated to ensuring interoperability in automation by creating and maintaining open specifications that standardize the communication of acquired process data, alarm and event records, historical data, and batch data to multi-vendor enterprise systems and between production devices. Production devices include sensors, instruments, PLCs, RTUs, DCSs, HMIs, historians, trending subsystems, alarm subsystems, and more as used in the process industry, manufacturing, and in acquiring and transporting oil, gas, and minerals.
The vision of the OPC Foundation is to provide the best technology, specifications, processes and tools to enable companies to build products and services that truly demonstrate multi-platform, multi-vendor, secure, reliable interoperability. OPC Foundation members benefit by being able to take advantage of the technology and marketing necessary to become leaders in the industry, supporting industrial standards for industrial automation and beyond.
2.1 History
The Foundation has over 400 members from around the world, including nearly all of the world's major providers of control systems, instrumentation, and process control systems. The OPC Foundation's forerunner, a task force composed of Fisher-Rosemount, Rockwell Software, Opto 22, Intellution, and Intuitive Technology, was able to develop a basic, workable OPC specification after only a single year's work. A simplified, stage-one solution was released in August 1996. The members of the task force included Al Chisholm, David Rehbein, Thomas Burke, Neil Petersen, Paul VanSlette, Phil White, Rich Harrison and Tom Quinn.
They all worked for companies that were competitors of each other, but they quickly established great friendships and relationships and focused on the task of developing a specification built on solid technology for interoperability. Sample code came first; the specification essentially documented the sample code. The OPC task force made sure that everything was feasible and exceeded the expectations of all of the vendors, to eliminate any excuses about adoption and building real products. This was not an academic exercise in futility; this was about developing technology that multiple vendors would quickly adopt in the interest of multi-vendor interoperability.
The OPC Foundation has been able to work more quickly than many other standards groups because it builds on existing computer-industry standards. Other groups, which have had to define standards "from the ground up", have had a more difficult time reaching consensus as a result of the scope of their work.
OPC started as a vendor-driven initiative to solve the simple device-driver problem: first-tier visualization and SCADA applications needed a standard way of reading and writing data from devices on the factory floor and from DCS systems in process control. The name OPC originally stood for OLE for Process Control, and within the first six months it became clear that the opportunity for standardization in industrial automation extended beyond process control. Factory automation and process control standardized quickly on the OPC technology, and OPC became, from a software perspective, the most successful industry standard adopted in industrial automation.
When we first started OPC, the thinking was that the hardware companies would always build the OPC servers for their own respective hardware, since they understood the intimate details of communicating with their devices. The major HMI vendors would then build the OPC clients, and OPC would magically provide a standardized communication interface letting hardware and software work together seamlessly. It all came together quickly: OPC released the first specification as a draft within six months from conception to completion. Within the first year of the initial OPC Data Access specification being finalized, there was a significant groundswell of hardware and software vendors that all adopted OPC as the standard mechanism for interoperability on Microsoft platforms. Vendors and system integrators realized a significant opportunity, with OPC opening the door for interoperability. OPC created a cottage industry whereby many companies started out using OPC technology as their infrastructure for getting a foot in the door of industrial automation. Software vendors started building OPC server products, in some cases developing better OPC servers for other vendors' hardware than the hardware manufacturers themselves. System integrators started to build their own custom OPC client applications, because the standard gave them an easy way to develop applications that could communicate with any hardware in factory automation or process control. It was so easy it became dangerous: everyone and their brother became an OPC expert. The OPC Foundation therefore put together the necessary infrastructure to begin interoperability and certification testing and validation.
The OPC Foundation created a vision of interoperability based on a solid principle of success is measured by the level of adoption of technology. Members have the unique opportunity to take advantage of the significant marketing and technical tools that the OPC Foundation provides to enable rapid deployment and certification of products based on the technology.
We are interested in your feedback and want to know the things that are important to you. We are truly committed to providing the best value for our members and non-members alike: the best technology, specifications, certification and processes to enable plug-and-play, multi-vendor, multi-platform, secure, reliable interoperability.
2.2 General information of the OPC
OPC is a standardized interface for accessing process data. It is based on the Microsoft COM/DCOM standard and has been extended to meet the requirements of data access in the field of automation. Here it is primarily used to read and write values from and to the controller. Typically, OPC clients are visualizations or programs for the acquisition of operating data; OPC servers are usually provided for PLC systems and field-bus cards. The OPC server is not a passive subprogram library but an executable program, which is started when the connection between client and server is established. This is why it is able to notify the client when the value or status of a variable has changed. Due to the characteristics of DCOM, it is even possible to access an OPC server which is running on another computer. Furthermore, a data source can be accessed simultaneously by more than one client via OPC. Another advantage OPC gains from the use of COM is that different programming languages (C++, Visual Basic, Delphi, Java) can be used. A resulting disadvantage, however, is the considerably higher usage of resources (memory and CPU time).
OPC (originally an acronym for OLE for Process Control) is a standard. It is not a product and it is not software code; it is a standard presented to developers of control systems and software applications for enabling communications between process control systems and process control applications. It exists because there are features inherent in process control applications that are not sufficiently addressed by other existing communications standards such as OLE, DDE, and others. It provides communications at the application layer, not for field I/O and hardware devices; that is the realm of Fieldbus, Profibus, Modbus, and others. OPC is built on and relies upon the Microsoft Windows architecture. It is specifically dependent on DCOM (Distributed Component Object Model), a Microsoft mechanism for inter-application communication on a network. OPC built on DCOM does not work on Unix, Linux, Apple, or other non-Windows systems, although portability is being designed into future enhancements of the standard. On one side of OPC communications is an OPC server application. This application "exposes" data items in a process control system or application.
2.3 OPC Server and OPC Client
All of the OPC Specifications are based on the OPC Client/Server model. Client/Server describes the relationship between two computer applications in which one application, the OPC client, makes a service request from another application, the OPC Server, which fulfils the request. Although the OPC Client/Server model can be used within a single computer, when used in a network it provides a versatile and modular infrastructure that offers flexibility, interoperability, and scalability. This model is different from other common distributed architectures such as Master/Slave (or Primary/Secondary) and peer-to-peer networks.
In the Master/Slave (or Primary/Secondary) model, the Master application controls one or more other applications, the Slaves. Once the Master/Slave relationship is established, the direction of control is always from the Master to the Slave(s). Peer-to-peer is a communication model in which each party has the same capabilities and either party can initiate a communication session.
So, what is an OPC Server and what does it do? An OPC Server is a software application that has been written to one of the OPC Specifications. An OPC Server will respond to requests, and provide data to one or more OPC Clients, in a standard, consistent manner. Any compliant OPC Client can interface with, and request data from, any compliant OPC Server, regardless of the vendor or the underlying system providing the data. The original audience for OPC Clients/Servers was the process automation industry, to provide a standard interface to industrial devices such as a PLC, DCS, HMI, SCADA system, RTU or DAS. Since requiring a standard interface to obtain data from a system is not unique to this industry, OPC Servers are now available for countless other systems including historians, relational databases, RFID scanners, file systems, enterprise applications, custom devices, building control systems, IT networks, robots, even road signs.
The primary OPC Specifications, OPC Data Access (OPC DA), OPC Historical Data Access (OPC HDA) and OPC Alarms & Events (OPC A&E), are based on Microsoft COM (and DCOM), which is also based on the Client/Server model. From a programmatic point of view, the terms OPC Client and COM Client, and OPC Server and COM Server, can be used interchangeably; every OPC Client/Server is also a COM Client/Server. In short, an OPC Server provides a set of standard interfaces, properties and methods, such that any OPC Client can connect/disconnect, obtain information on what data is available, and read/write data in a standard manner.
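That standard surface (connect/disconnect, browse what is available, read/write) can be sketched in a few lines. This is a hedged illustration with invented names, not the real COM interfaces:

```python
# Sketch of the standard surface an OPC server presents to any client:
# connect/disconnect, browse available items, and read/write them.
# All class, method and item names here are illustrative.

class MiniOpcServer:
    def __init__(self, items):
        self._items = dict(items)
        self._clients = set()

    def connect(self, client_name):
        self._clients.add(client_name)

    def disconnect(self, client_name):
        self._clients.discard(client_name)

    def browse(self):
        """Let a client discover what data is available."""
        return sorted(self._items)

    def read(self, item_id):
        return self._items[item_id]

    def write(self, item_id, value):
        self._items[item_id] = value


srv = MiniOpcServer({"Line1.Speed": 120, "Line1.Temp": 66.5})
srv.connect("HMI-A")
print(srv.browse())   # -> ['Line1.Speed', 'Line1.Temp']
```

Because every server exposes this same surface, a compliant client needs no vendor-specific code to discover and use the data.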
Chapter 3
3.1 What is OPC?
OPC is open connectivity via open standards. The OPC specifications fill a need in automation much as printer drivers did for Windows. See the summary of current and emerging OPC Specifications and OPC Certification.
OPC is open connectivity in industrial automation and the enterprise systems that support industry. Interoperability is assured through the creation and maintenance of open standards specifications. There are currently seven standards specifications completed or in development. Based on fundamental standards and technology of the general computing market, the OPC Foundation adapts and creates specifications that fill industry-specific needs. OPC will continue to create new standards as needs arise and to adapt existing standards to utilize new technology.
[Figure: OPC architecture. An application (OPC client) accesses OPC server X, OPC server Y and an OPC server (simulator) running on node servers; the servers connect over Ethernet and field bus to Brand X PLCs and to sensors/actuators. Only the client-server interfaces are covered by the OPC standard.]
OPC is a series of standards specifications. The first standard (originally called simply the OPC Specification and now called the Data Access Specification) resulted from the collaboration of a number of leading worldwide automation suppliers working in cooperation with
Microsoft. Originally based on Microsoft's OLE COM (component object model) and DCOM (distributed component object model) technologies, the specification defined a standard set of objects, interfaces and methods for use in process control and manufacturing automation applications to facilitate interoperability. The COM/DCOM technologies provided the framework for software products to be developed. There are now hundreds of OPC Data Access servers and clients.
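The "standard set of objects, interfaces and methods" idea can be sketched as an abstract contract that any vendor's server implements. This is a minimal illustration in Python rather than actual COM; all class and tag names are hypothetical:

```python
# Sketch: a standard interface contract, in the spirit of what the Data
# Access specification defines for COM objects. Any vendor server that
# implements the contract is interchangeable from the client's viewpoint.

from abc import ABC, abstractmethod

class DataAccessInterface(ABC):
    """The agreed-upon methods every compliant server must provide."""

    @abstractmethod
    def read(self, item_id): ...

    @abstractmethod
    def write(self, item_id, value): ...


class VendorAServer(DataAccessInterface):
    """One vendor's implementation; internally it would speak its own
    proprietary device protocol, which the client never sees."""
    def __init__(self):
        self._data = {"Tank.Level": 3.2}

    def read(self, item_id):
        return self._data[item_id]

    def write(self, item_id, value):
        self._data[item_id] = value


def hmi_poll(server: DataAccessInterface, item_id):
    """Client code written once against the interface, not the vendor."""
    return server.read(item_id)

print(hmi_poll(VendorAServer(), "Tank.Level"))   # -> 3.2
```

Swapping in a VendorBServer that honours the same contract would require no change to `hmi_poll`, which is exactly the interoperability the specification aims for.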
Everyone's favorite analogy for needing the original Data Access Specification is printer drivers, first in DOS and then in Windows. Under DOS the developer of each application also had to write a printer driver for every printer. So AutoCAD wrote the AutoCAD application and the printer drivers, and WordPerfect wrote the WordPerfect application and the printer drivers. They had to write a separate printer driver for every printer they wanted to support: one for an Epson FX-80, one for the H-P LaserJet, and so on. In the industrial automation world, Intellution wrote their Human Machine Interface (HMI) software and a proprietary driver for each industrial device (including every PLC brand). Rockwell wrote their HMI and a proprietary driver for each industrial device (including every PLC brand, not just their own).
Windows solved the printer driver problem by incorporating printer support into the operating system. Now one printer driver served all the applications! And these were printer drivers that the printer manufacturer wrote (not the application developer).
Windows provided the infrastructure to allow the industrial device driver's solution as well. Adding the OPC specification to Microsoft's OLE technology in Windows allowed standardization. Now the industrial devices' manufacturers could write the OPC DA Servers and the software (like HMIs) could become OPC Clients.
The resulting selfish benefit to the software suppliers was the ability to reduce their expenditures for connectivity and focus them on the core features of the software. For the users, the benefit was flexibility. They could now choose software suppliers based on features instead of "Do they have the driver for my unique device?" They no longer had to create a custom interface and bear the full cost of creating and upgrading it through operating-system or device-vendor changes. Users were also assured of better-quality connectivity, as the OPC DA Specification codified the connection mechanism and compliance testing. OPC interface products are built once and reused many times; hence, they undergo continuous quality control and improvement.
The user's project cycle is shorter using standardized software components. And their cost is lower. These benefits are real and tangible. Because the OPC standards are based in turn upon computer industry standards, technical reliability is assured. The original specification standardized the acquisition of process data. It was quickly realized that communicating other types of data could benefit from standardization. Standards for Alarms & Events, Historical Data, and Batch data were launched.
Current and emerging OPC Specifications include:
OPC Data Access
The originals! Used to move real-time data from PLCs, DCSs, and other control devices to HMIs and other display clients. The Data Access 3 specification is now a Release Candidate. It leverages earlier versions while improving the browsing capabilities and incorporating XML-DA Schema.

OPC Alarms & Events
Provides alarm and event notifications on demand (in contrast to the continuous data flow of Data Access). These include process alarms, operator actions, informational messages, and tracking/auditing messages.
OPC Batch
This spec carries the OPC philosophy to the specialized needs of batch processes. It provides interfaces for the exchange of equipment capabilities (corresponding to the S88.01 Physical Model) and current operating conditions.
OPC Data Exchange
This specification takes us from client/server to server-to-server, with communication across Ethernet fieldbus networks. This provides multi-vendor interoperability! And, oh by the way, adds remote configuration, diagnostic and monitoring/management services.
OPC Historical Data Access
Where OPC Data Access provides access to real-time, continually changing data, OPC Historical Data Access provides access to data already stored. From a simple serial data logging system to a complex SCADA system, historical archives can be retrieved in a uniform manner.
OPC Security
All the OPC servers provide information that is valuable to the enterprise and, if improperly updated, could have significant consequences to plant processes. OPC Security specifies how to control client access to these servers in order to protect this sensitive information and to guard against unauthorized modification of process parameters.
OPC XML-DA
Provides flexible, consistent rules and formats for exposing plant floor data using XML, leveraging the work done by Microsoft and others on SOAP and Web Services.
OPC Complex Data
A companion specification to Data Access and XML-DA that allows servers to expose and describe more complicated data types such as binary structures and XML documents.
OPC Commands
A Working Group has been formed to develop a new set of interfaces that allow OPC clients and servers to identify, send and monitor control commands which execute on a device.
OPC Unified Architecture
A new set of specifications, not based on Microsoft COM, that will provide standards-based cross-platform capability.
SPECIFICATION DETAILS OF OPC CONFIGURATIONS
Specification / Description

OPC Data Access
The originals. Used to move real-time data from PLCs, DCSs, and other control devices to HMIs and other display clients. The Data Access specification is now a Release Candidate. It leverages earlier versions while improving the browsing capabilities and incorporating XML-DA Schema.

OPC Alarms & Events
Provides alarm and event notifications on demand (in contrast to the continuous data flow of Data Access). These include process alarms, operator actions, informational messages, and tracking/auditing messages.

OPC Batch
This specification carries the OPC philosophy to the specialized needs of batch processes. It provides interfaces for the exchange of equipment capabilities (corresponding to the S88.01 Physical Model) and current operating conditions.

OPC Data Exchange
This specification takes us from client/server to server-to-server, with communication across Ethernet fieldbus networks. This provides multi-vendor interoperability! And adds remote configuration, diagnostic and monitoring/management services.

OPC Historical Data Access
Where OPC Data Access provides access to real-time, continually changing data, OPC Historical Data Access provides access to data already stored. From a simple serial data logging system to a complex SCADA system, historical archives can be retrieved in a uniform manner.

OPC Security
All the OPC servers provide information that is valuable to the enterprise and, if improperly updated, could have significant consequences to plant processes. OPC Security specifies how to control client access to these servers in order to protect this sensitive information and to guard against unauthorized modification of process parameters.

OPC XML-DA
Provides flexible, consistent rules and formats for exposing plant floor data using XML, leveraging the work done by Microsoft and others on SOAP and Web Services.

OPC Complex Data
A companion specification to Data Access and XML-DA that allows servers to expose and describe more complicated data types such as binary structures and XML documents.
OPC Commands
A Working Group has been formed to develop a new set of interfaces that allow OPC clients and servers to identify, send and monitor control commands which execute on a device.
Chapter 4
4.1 The OPC Architecture
One of the most important things to understand about OPC is that it is an Application Programming Interface (API) and not an "on the wire" protocol. It is at a higher level of abstraction than communications protocols such as Ethernet, TCP/IP or even the MODBUS Application Protocol. For most developers using the OPC API, the underlying network transport or data encoding used by the API to exchange data is irrelevant.
OPC Layering

As the figure shows, underlying OPC are three very critical communications protocols: COM, DCOM and RPC.

Component Object Model (COM) is a successor to Dynamic Link Libraries (DLLs) and is a software architecture developed by Microsoft to build component-based applications. It allows programmers to encapsulate reusable pieces of code in such a way that other applications can use them without having to worry about implementation details. In this way, COM objects can be replaced by newer versions without having to rewrite the applications using them.
Distributed Component Object Model (DCOM) is a network-aware version of COM. It hides from software developers the difference between invoking local interfaces (i.e. on the same computer) and remote interfaces (i.e. on a different computer). In order to do this, all the parameters, and the returned value, must be passed by value. The process of converting the parameters to data to be transferred over the wire is called marshalling. Once marshalling is completed, the data stream is serialized, transmitted, and finally restored to its original data ordering on the other end of the connection.
DCOM uses the mechanism of Remote Procedure Calls (RPCs) to transparently send and receive information between COM components (i.e. clients and servers) on the same network. RPC allows system developers to control remote execution of programs without the need to develop specific procedures for the server. The client program sends a message to the server with the appropriate arguments, and the server returns a message containing the results of the executed program.
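The marshal, transmit, unmarshal round trip described above can be sketched with Python's struct module. The two-field wire layout here is invented for illustration; real DCOM marshals with its own encoding that callers never see:

```python
import struct

# Marshal a remote call's parameters into a byte stream, "send" it, and
# unmarshal on the other side: the round trip DCOM/RPC performs for us.
# Wire layout (little-endian int + double) is invented for this sketch.

def marshal_write_call(item_number: int, value: float) -> bytes:
    # pack the parameters by value, as DCOM requires
    return struct.pack("<id", item_number, value)

def unmarshal_write_call(payload: bytes):
    # restore the original data ordering on the receiving end
    return struct.unpack("<id", payload)

wire = marshal_write_call(42, 85.0)       # client side: serialize
item, value = unmarshal_write_call(wire)  # server side: restore
print(item, value)                        # -> 42 85.0
```

The point of the sketch is that both ends agree on the byte layout in advance, which is precisely what lets RPC hide the network from application code.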
4.2 OPC Data Model
The information available from the OPC server is organized into groups of related items for efficiency. Servers can contain multiple groups of items, and a group can be either:
• a public group, available for access by any client, or
• a local group, only accessible by the client that created it.
In the figure above we expand our earlier example to include two PLCs
connecting to a computer running one or more OPC servers maintaining grouped information. The PLCs and the OPC servers communicate using the native PLC protocol while the OPC clients running on the other computers access the data in the OPC server via DCOM.
Figure: OPC Interaction

In the earlier example, where a MODBUS/TCP OPC server was connected to a MODBUS-capable PLC, we might configure a "WaterLevel" group on an HMI with five members:
1. "SP" (set point),
2. "CO" (control output),
3. "PV" (process variable),
4. "LoAlarm" (low water alarm),
5. "HiAlarm" (high water alarm).
The HMI could register the "WaterLevel" group with the SP, CO, PV and alarm members, then read the current values for all five items either at timed intervals or by exception (i.e. when their values change). The HMI could also have write access to the "SP" set point variable.
One significant advantage of OPC is that we do not have to deal directly with the control device's internal architecture. The software can deal with named items and groups of items instead of raw register numbers and data types. This also makes it easier to add or change control systems, such as when migrating from a proprietary protocol to an Ethernet-based protocol, without altering the client applications.
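The "WaterLevel" group and its read-by-exception behaviour can be modelled in a few lines. This is an illustrative sketch, not a real OPC API:

```python
# Sketch of the OPC data model: a named group of items, with updates
# delivered by exception (only when a value actually changes). Item
# names follow the "WaterLevel" example in the text.

class OpcGroup:
    def __init__(self, name, items):
        self.name = name
        self._values = dict(items)
        self._subscribers = []

    def subscribe(self, callback):
        """Register a client callback, as an HMI registers a group."""
        self._subscribers.append(callback)

    def update(self, item, value):
        """Server-side update; notify clients only on change."""
        if self._values.get(item) != value:
            self._values[item] = value
            for cb in self._subscribers:
                cb(item, value)

    def read_all(self):
        """Timed-interval alternative: read every item's current value."""
        return dict(self._values)


water = OpcGroup("WaterLevel",
                 {"SP": 50.0, "CO": 0.0, "PV": 48.7,
                  "LoAlarm": False, "HiAlarm": False})

changes = []
water.subscribe(lambda item, value: changes.append((item, value)))
water.update("PV", 49.1)   # changed -> client notified
water.update("PV", 49.1)   # unchanged -> no notification
print(changes)             # -> [('PV', 49.1)]
```

Reading by exception keeps network traffic proportional to how often the process actually changes, rather than to the polling rate.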
4.3 OPC Delivers Plug and Play Connectivity for Industrial Automation
For every automation system installed today, a significant amount of time and money is spent ensuring that the system can share timely information with other systems and devices throughout the manufacturing enterprise. OPC defines an open industry-standard interface based on ActiveX and OLE technology that provides interoperability between disparate field devices, automation/control, and business systems.
In years past, automation suppliers and systems integrators have developed numerous proprietary interfaces to their devices and control systems to permit access to real-time information. Over time, Microsoft’s dynamic data exchange (DDE) became a de facto standard interface for many types of automation devices such as programmable logic controllers (PLCs). Automation devices and systems used DDE protocol, and other derivatives, such as Net DDE, as a mechanism to pass data between applications, such as from a spreadsheet to a word processor, with some success. However, many end users found performance and reliability limitations when using DDE to pass real-time information between devices in their control system.
Several years ago, Microsoft replaced DDE with a higher performance, more robust, and reliable data exchange technology – object linking and embedding (OLE). Based on the component object model (COM), OLE is part of a broad software infrastructure also referred to as ActiveX.
ActiveX and distributed component object model (DCOM) are important pieces of Microsoft’s client/server, distributed computing strategy. With ActiveX and DCOM, application software can interoperate and communicate between modules distributed across a computer network. However, to effectively use ActiveX as part of a software integration framework for manufacturing, automation suppliers must agree on a standard way to use ActiveX. A number of leading automation hardware and software suppliers working in cooperation with Microsoft collaborated to define a new standard for using ActiveX in manufacturing applications, OLE for process control (OPC).
Chapter 5
NETWORK MANAGEMENT
5.1 Network Management in the Pre-OPC era
During the pre-OPC era, each time a manufacturer developed a new device, much time and labor was spent creating a driver suitable for each application.
This meant that the device manufacturer needed to work with and understand each application system to incorporate the new implementation.
The following figure illustrates the complexity involved. For example, for Device A to work with the three applications shown in the figure (Wonderware Intouch, Citect, and Fix Dmacs), the manufacturer of Device A would need to develop three separate drivers. As you can see, the effort required for this kind of development was complex and costly.
[Figure: pre-OPC connectivity. Separate drivers (MasterBus MMS driver, XWAY driver, Profinet driver) each connect the visualization, history and database applications to their respective networks.]
5.2 Network Management in the OPC era
In contrast, when manufacturers develop an OPC Server for use with a new product, the standardized OPC communication interface makes it easy for the HMI/SCADA control system to access and manage network or field control devices. Indeed, using OPC puts much less of a burden on the manufacturer, and provides a higher degree of flexibility for the control engineer.
5.3 NETWORK COMMUNICATION WITH OPC

Figure: network communication with OPC. Controllers from different vendors (ABB AC800M, Télémécanique TSX, Siemens S7) are each served by their own OPC server (AC800M OPC server, Schneider OPC server, Siemens OPC server). The underlying drivers (MMS, XWAY, ProfiNet) still exist, but the clients no longer see them; Operator IT application software and the Historian (Information Manager) are written independently of the type of controller.
5.4 DIFFERENCE BETWEEN OPC & PRE-OPC INTEROPERABILITY

Before OPC, every connection between an application (display application, trend application, etc.) and a DCS controller or PLC needed its own custom interface; this approach was costly, inefficient, and risky.

With OPC, applications and controllers connect through a single standard interface, which reduces cost, offers more choices, and increases productivity in terms of performance, connectivity, and interoperability.
Chapter 6
OPC TECHNOLOGY
OPC is based mainly on:
1. OLE - Object Linking and Embedding
2. COM/DCOM - Component Object Model / Distributed Component Object Model

Figure: the OPC technology stack. OPC is built on OLE and ActiveX, which sit on top of the (Distributed) Component Object Model (COM/DCOM); between nodes, DCOM runs over a transport (TCP/IP, UDP, queued) on Ethernet.
6.1 OLE-OBJECT LINKING & EMBEDDING
Many users of Microsoft’s Office package are familiar with the benefits of OLE without necessarily being aware of the terminology.
In the past, most integrated office suites only offered simple copy and paste, allowing users to copy information from one document or application and paste it at the appropriate point in a second document. The major drawback of this procedure was that if the original information changed, the procedure had to be repeated continually in order to keep the second document current. The second drawback was the need to remember which application created the information and where the files had been put.
One option that overcomes these drawbacks is to create a link between the two files. Now, whenever the data in the source file changes, this change is immediately reflected in all other applications using that data. This is termed dynamic data exchange (DDE).
Another option is to embed the information into the destination document and use the source application’s tools to update the information. The source application can either be launched from within the destination document – with a separate window appearing with the source application showing the information to edit – or the application tools can be embedded.
In this latter option, when a user selects the object to edit, the menu and toolbar change to the source application but the user remains within the document and can see the surrounding text or data. This kind of sharing is termed Object Linking and
embedding (OLE) in which an ‘object’ can be text, a chart, table, picture, equation, or any other form of information that is created and edited – usually within an application other than the source application.
The major difference between linking and embedding is that linked (DDE) information is stored in the source document. Thus, the destination contains only a code that supplies the name and location of the source application, document and the portion of the document.
Embedded (OLE) information, on the other hand, is stored in the destination document and the code associated with OLE points to a source application rather than a file.
Object Linking and Embedding (OLE) lets you insert an object into a document and access tools to manipulate the object (usually the tools of the program that were used to create the object).
With OLE, you place objects into documents either by embedding them or by linking them.
Once the source object has been embedded into the destination file any changes occurring in the source object or the parent file will also cause changes to take place in the destination file.
OLE 1.0
OLE 1.0, released in 1990, was the evolution of the original dynamic data exchange, or DDE, concepts that Microsoft developed for earlier versions of Windows. While DDE was limited to transferring limited amounts of data between two running applications, OLE was capable of
maintaining active links between two documents or even embedding one type of document within another.
OLE 2.0
OLE 2.0 was the next evolution of OLE, sharing many of the same goals as version 1.0, but it was re-implemented on top of the Component Object Model (COM) instead of dynamic data exchange (DDE).
6.2 COM/DCOM - COMPONENT OBJECT MODEL / DISTRIBUTED COMPONENT OBJECT MODEL
COM
Component Object Model (COM) is a binary-interface standard for software componentry introduced by Microsoft in 1993. It is used to enable interprocess communication and dynamic object creation in a large range of programming languages. The term COM is often used in the Microsoft software development industry as an umbrella term that encompasses the OLE Automation, ActiveX, and COM+ technologies.
A COM Object may reside in a DLL, in which case it is known as an "In-Process" server because it shares address space with the client, or controlling application.
The essence of COM is a language-neutral way of implementing objects that can be used in environments different from the one in which they were created, even across machine boundaries.
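This interface-based idea can be sketched in Python. This is a loose analogy only, not actual COM; the names `IDataAccess`, `SimulatedServer`, and `query_interface` are invented for the example. The point it illustrates is that clients program against a fixed interface contract and can ask whether an object implements it, much as COM clients call QueryInterface.

```python
from abc import ABC, abstractmethod

class IDataAccess(ABC):
    """Analogy of a COM interface: a fixed contract of methods."""
    @abstractmethod
    def read(self, item_id: str): ...

class SimulatedServer(IDataAccess):
    """One interchangeable implementation of the interface."""
    def __init__(self):
        self._items = {"Boiler.Temp": 78.5}

    def read(self, item_id: str):
        return self._items[item_id]

def query_interface(obj, interface):
    """Crude stand-in for COM's QueryInterface: return the object
    if it implements the requested interface, else None."""
    return obj if isinstance(obj, interface) else None

# The client only knows the interface, never the concrete class.
server = query_interface(SimulatedServer(), IDataAccess)
print(server.read("Boiler.Temp"))  # -> 78.5
```

Because the client depends only on the interface, any other class implementing `IDataAccess` could be substituted without changing client code, which is the essence of COM's language-neutral component model.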
6.3 DCOM
Distributed Component Object Model (DCOM) is a proprietary Microsoft technology for communication among software components distributed across networked computers. DCOM, which originally was called "Network OLE", extends Microsoft’s COM, and provides the communication substrate under Microsoft’s COM+ application server infrastructure. It has been deprecated in favor of the Microsoft .NET Framework.
The addition of the "D" to COM was due to extensive use of DCE/RPC (Distributed Computing Environment/Remote Procedure Calls).
To access DCOM settings on a computer running Windows 2000, Windows XP or earlier, click Start > Run, and type "dcomcnfg". To access DCOM settings on a computer running Windows Vista or Windows 7, click Start, type "dcomcnfg", right-click "dcomcnfg.exe" in the list and click "Run as administrator".
6.4 DCOM ARCHITECTURE
Figure: DCOM architecture. Within the same process, client and component communicate through fast, direct function calls. On the same machine, COM connects the client process to the server process. Across machines, the client reaches the DCOM component on the server machine through the secure, reliable, and flexible DCE-RPC-based DCOM protocol.
Chapter 7
OPC INTERFACE
Data exchange in OPC has two parts:
1. One part, which is for exchanging data with controlled devices, implements the corresponding OPC Server for different communication rules specific to the device.
2. The other part is based on the COM/DCOM specification of communication between Client and Server, and consequently needs to be set up as client/server architecture.
In order to accommodate the majority of system developers, many development tools come equipped with an OPC Client function to make the tools easy to learn and use. The OPC specification contains two sets of interfaces, provided for software developers to match the communication capabilities of industrial control equipment.
The OPC Automation interface is intended for use by applications such as VB and Excel script based programs. The OPC Custom interface is intended for use with higher level programming languages, such as C++.
7.1 SERVER-CLIENT INTERFACE
The server-client interface takes place through Microsoft's COM/DCOM technology. When client and server are on the same PC the technology used is COM, whereas when they are on different systems the technology is DCOM. The physical medium for the interface is Ethernet.
Figure: server-client interface. An OPC client on one PC reaches an OPC server on the same PC through COM, and an OPC server on a different PC through DCOM over Ethernet. Each OPC server in turn connects to its PLC or I/O device over a local link (PC card, RS-232, etc.).
Chapter 8
OPC SERVER CLASSIFICATION
The OPC server can be classified into three main types:
1. OPC DA (Data Access)
2. OPC AE (Alarm & Events)
3. OPC HDA (Historical Data Access)
8.1 OPC DA
OPC DA stands for OPC Data Access. It is an OPC Foundation specification that defines how real-time data can be transferred between a data source and a data sink (for example: a PLC and an HMI) without either of them having to know each other’s native protocol.
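The read semantics of OPC DA can be sketched in Python. This is an illustrative model only, not a real OPC stack; the names `DAServerCache` and `PLC1.FlowRate` are invented for the example. What it shows is real, though: every OPC DA item read returns a value, a quality, and a timestamp, and the client never touches the device's native protocol.

```python
from dataclasses import dataclass
import time

@dataclass
class DAValue:
    value: float
    quality: str      # simplified; real DA uses numeric quality codes
    timestamp: float

class DAServerCache:
    """Toy model of a DA server's item cache."""
    def __init__(self):
        self._cache = {}

    def update_from_device(self, item_id, value, good=True):
        """Device-side driver writes into the cache."""
        q = "GOOD" if good else "BAD"
        self._cache[item_id] = DAValue(value, q, time.time())

    def read(self, item_id):
        """Client-side read: value, quality, timestamp -
        no knowledge of the device protocol required."""
        return self._cache[item_id]

srv = DAServerCache()
srv.update_from_device("PLC1.FlowRate", 12.7)
v = srv.read("PLC1.FlowRate")
print(v.value, v.quality)  # -> 12.7 GOOD
```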
Why is OPC DA so popular? How is it different from previous protocols?
The OPC DA Client/Server architecture was the first architecture defined by the OPC Foundation. Before OPC DA, vendors' products (devices, PLCs, HMIs) required any device or application connecting to them to have a "custom driver" that translated between the third-party connection and the product in question. There were many problems associated with custom-driver-based communications; some of the most common were: high cost, proprietary technology that tied users to a particular vendor, difficulty of configuration and maintenance because each custom driver had its own way of doing things, and difficulty of keeping up to date because of the constant release of new devices and applications. In contrast, OPC DA made it possible to connect to any real-time data source without a custom connector written specifically for the data-source/data-sink pair. Hence, reads and writes could be performed without the data sink having to know the data source's native protocol or internal data structure.
OPC DA Specification
The OPC DA specification belongs to the OPC Foundation and has gone through a number of revisions. The key ones are:
Year   Version             Comment
1996   1.0                 Initial specification.
1997   DA 1.0a             Data Access (DA) name adopted to differentiate it from other specifications being concurrently developed.
1998   DA 2.0 - DA 2.05a   Numerous specification clarifications and modifications.
2003   DA 3.0              Further additions and modifications.
Given that there are different versions of the OPC Data Access (OPC DA) specification, the key question is: are these versions backward compatible? For example: can an OPC DA 1.0a client communicate with an OPC DA 3.0 OPC Server? The answer is: it depends.
Data Access OPC Client and OPC Server backward compatibility
It is possible, and recommended, that vendors write OPC Clients and OPC Servers that are backward compatible. The reality, however, is that backward compatibility is optional rather than mandatory, which means a number of vendors chose not to follow such advice (and continue to do so) and developed OPC DA Servers that only recognize one or two of the specifications but not all. What this means is that while these non-backward-compatible OPC Servers and OPC Clients still give users the advantage of using OPC, they only work with specific versions of the specification. The good news is that companies like Matrikon OPC not only develop fully backward-compatible OPC Servers, they also offer OPC data management products (e.g. OPC Data Manager and OPC Security Gateway) that sit between the non-backward-compatible OPC Clients and OPC Servers to enable them to communicate with each other by translating between OPC DA revisions on the fly.
8.2 OPC A&E (Alarm & Events)
Events are changes in the process that need to be logged, such as "production start". Alarms are abnormal states in the process that require attention, such as "low oil pressure".
Background
Today, with the level of automation being applied in manufacturing, operators are dealing with higher and higher amounts of information. Alarming and event subsystems have been used to indicate areas of the process that require immediate attention. Areas of interest include (but are not limited to): safety limits of equipment, event detection, and abnormal situations. In addition to operators, other client applications may collect and record alarm and event information for later audit or correlation with other historical data.

Alarm and event engines today produce an added stream of information that must be distributed to users and software clients that are interested in this information. Currently, most alarming/event systems use their own proprietary interfaces for the dissemination and collection of data. There is no capability to augment existing alarm solutions with other capabilities in a plug-and-play environment. This forces each developer to recreate the same infrastructure for their products, as all other vendors have had to do independently, with no interoperability with any other systems.
An alarm and event subsystem typically lets clients:
- determine the exact time of change (time stamping)
- categorize by priorities
- log for further use
- acknowledge alarms (events are not acknowledged)
- link to a clear-text explanation
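The distinction these capabilities rest on can be sketched with a toy log. This is hypothetical code, not the OPC A&E interface; the class `AlarmEventLog` and its methods are invented for illustration. Events are simply time-stamped and logged, while alarms are categorized by priority and carry an acknowledgement state.

```python
import time

class AlarmEventLog:
    def __init__(self):
        self.log = []            # time-stamped history of everything
        self.active_alarms = {}  # alarm name -> acknowledged?

    def report_event(self, message):
        """Events are logged with a timestamp; no acknowledgement."""
        self.log.append((time.time(), "EVENT", message))

    def raise_alarm(self, name, priority):
        """Alarms are categorized by priority and await acknowledgement."""
        self.log.append((time.time(), "ALARM", f"{name} (priority {priority})"))
        self.active_alarms[name] = False

    def acknowledge(self, name):
        """Only alarms are acknowledged, typically by an operator."""
        self.active_alarms[name] = True

aelog = AlarmEventLog()
aelog.report_event("production start")
aelog.raise_alarm("low oil pressure", priority=1)
aelog.acknowledge("low oil pressure")
print(len(aelog.log), aelog.active_alarms["low oil pressure"])  # -> 2 True
```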
In keeping with the desire to integrate data at all levels of a business (as was stated in the OPC Data background information), alarm information can be considered to be another type of data. This information is a valuable component of the information architecture outlined in the OPC Data specification. Manufacturers and consumers want to use off the shelf, open solutions from vendors that offer superior value that solves a specific need or problem.
Purpose
To identify interfaces used to pass alarm and event information between components which would be suitable for standardization. Additionally, this document details the design of those interfaces in such a way as to complement the existing OPC Data Access interfaces.
Relationship to Other OPC Specifications
This specification complements but is separate from the OPC Data Access and the OPC Historical Data Access specifications. It references the OPC Common specification, in that OPC Event Servers support the interfaces specified there.
Scope
The scope of this document is to provide a specification for a software "conduit" for alarm and event information to be broadcast from servers to clients. "Conduit" refers to the notion that this document is not intended to specify solutions for alarming problems, but rather to provide an enabling technology that will permit multi-vendor solutions to operate in a heterogeneous computing environment.
Multiple Levels of Capability
The OPC Alarms and Event specification accommodates a variety of applications that need to share alarm and event information. In particular, there are multiple levels of capability for handling alarm and event functionality, from the simple to the sophisticated.
Types of Alarm and Event Servers
There are several types of OPC Alarm and Event Servers. Some key types supported by this specification are:
- Components that can detect alarms and/or events and report them to one or more clients.
- Components that can collect alarm and event information from multiple sources (whether by subscribing to other OPC alarm and event servers or by detecting alarms and events on their own) and report such information to one or more clients.
Distinctions are made between these two roles because this specification does not overburden simple alarm and event servers, yet still facilitates more sophisticated servers. Simpler software components or devices that can detect and report alarms and events should not have to also perform advanced sorting or filtering operations. In other words, the required server interface is kept simple: it supports the reporting of information but not much more. Thus, simple event servers may choose to restrict the functionality of the event filtering they provide. They may also choose not to implement such functions as area browsing, enabling/disabling of conditions, and translation to item IDs.

Optional objects and interfaces are noted in the reference portion of this specification. Similarly, methods which may return E_NOTIMPL, or which may have varying levels of functionality, are also noted.
Types of Alarm and Event Clients
Clients for OPC alarm and event servers are typically components that subscribe to and display, process, collect and/or log alarm and event information. The clients of OPC alarm and event servers may include (but are not limited to):
- operator stations
- event/alarm logging components
- event/alarm management subsystems
Client – Server Interactions
Figure: Interaction between several OPC Alarm and Event Servers and Clients
The figure shows several types of OPC Alarm and Event clients and servers, including a Device, SPC Module, Operator Stations, Event Logger, and an Alarm/Event Management subsystem. The arrowhead end of the lines connecting the components indicates the client side of the connection. Notice that some components play multiple roles: the Alarm/Event Management server is also a client to more than one OPC Alarm and Event server. In this model, the Alarm/Event Management server acts as a kind of collector or data concentrator, providing its clients with perhaps more organized information or a more advanced interface. Unlike the Alarm/Event Management server, the Device and SPC Modules implement the simplest Alarm/Event server interface.
8.3 OPC HDA (Historical Data Access)
Historical data are process states and events, such as process variables, operator actions, and recorded alarms, that are stored as logs in long-term storage for later analysis. OPC HDA (Historical Data Access) specifies how historical data are retrieved from the logs in the long-term storage, filtered, and aggregated (e.g. to compute averages or peaks).
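The retrieve-filter-aggregate step can be sketched as follows. This is an illustrative function, not the real OPC HDA interface (whose aggregates are defined in the specification); the function name and sample data are invented for the example.

```python
def hda_read_aggregate(history, start, end, aggregate):
    """Retrieve (timestamp, value) samples in [start, end] from a
    time-ordered log, then apply an aggregate such as average or peak."""
    window = [v for t, v in history if start <= t <= end]
    if aggregate == "average":
        return sum(window) / len(window)
    if aggregate == "maximum":   # i.e. the peak value
        return max(window)
    raise ValueError(f"unsupported aggregate: {aggregate}")

# Logged process variable: (seconds, temperature)
templog = [(0, 20.0), (10, 22.0), (20, 26.0), (30, 24.0)]
print(hda_read_aggregate(templog, 0, 20, "average"))   # -> 22.666...
print(hda_read_aggregate(templog, 0, 30, "maximum"))   # -> 26.0
```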
Background
A standard mechanism for communicating to numerous data sources, either devices on the factory floor, or a database in a control room is the motivation for this specification. The standard mechanism would consist of a standard automation interface targeted to allow Visual Basic applications, as well as other automation enabled applications to communicate to the above named data sources.
Manufacturers need to access data from the plant floor and integrate it into their existing business systems. Manufacturers
must be able to utilize off-the-shelf tools (SCADA packages, databases, spreadsheets, etc.) to assemble a system to meet their needs. The key is an open and effective communication architecture concentrating on data access, and not on the types of data. We have addressed this need by architecting and specifying a standard automation interface to the OPC Historical Data Access Custom interface, to facilitate the needs of applications that utilize an automation interface to access plant floor data.
Purpose
What is needed is a common way for automation applications to access data from any data source like a device or a database.
The OPC Historical Data Access Automation defines a standard by which automation applications can access process data. This interface provides the same functionality as the custom interface, but in an “automation friendly” manner.
Given the common use of Automation to access other software environments (e.g.: RDBMS, MS Office applications, WWW objects), this interface has been tailored to ease application development, without sacrificing functionality defined by the Custom interface.
The figure below shows an Automation client calling into an OPC Historical Data Access Server using a 'wrapper' DLL. This wrapper translates between the custom interface provided by the server and the automation interface desired by the client. Note that in general the connection between the Automation Client and the automation Server will be 'In Process'
while the connection between the Automation Server and the Custom Server may be either In Process, Local or Remote.
Functional Requirements
The automation interface provides nearly all of the functionality of the required and optional Interfaces in the OPC Historical Data Access Custom Interface. If the OPC Historical Data Access Custom server supports the interface, the functions and properties at the automation level will work. Automation interfaces generally do not support optional capabilities in the same way that the custom interface does. If the underlying custom interface omits some optional functionality then the corresponding automation functions and properties will exhibit some reasonable default behavior as described in more detail later in this document.
The interfaces are fully supported by VC++ and Visual Basic 5.0; they allow any application which has an OLE Automation interface (e.g. VB, VC++, and VBA-enabled applications) to access the OPC interface, according to the limitations of the respective application.
The interface described in this specification specifically does NOT support VBScript or JavaScript. A separate wrapper could be developed to accommodate the needs of VBScript and JavaScript; however, such an effort is outside the scope of this specification.
OPC HDA Automation Server Object Model
Chapter 9
OPC DATA HUB
9.1 Advanced OPC Tunneling
In today’s process control environment, OPC is becoming the protocol of choice. There are many OPC servers offered by companies specializing in connectivity, and PLC, DCS, and equipment manufacturers often offer an OPC server interface as part of their product suite. This allows software vendors to create OPC client applications that easily access real-time data from any piece of equipment offered by any vendor. Data from the factory floor is more available now than ever before. Accessing this data often means connecting over corporate or public networks.
But networking OPC is challenging. The networking protocol for OPC
is DCOM, which was not designed for industrial real-time data transfer. DCOM is difficult to configure, responds poorly to network breaks, and has serious security flaws. Using DCOM between different LANs, such as connecting between manufacturing and corporate LANs, is sometimes impossible to configure. Using OPC over DCOM also requires more network traffic than some networks can handle because of bandwidth limitations, or due to the high traffic already on the system. To overcome these limitations, Cogent Real-Time Systems and their technical partner, Software Toolbox, offer a “tunneling” solution, as an alternative to DCOM, to transfer OPC data over a network. Let’s take a closer look at how tunneling solves the issues associated with DCOM, and how the OPC DataHub provides a secure, reliable, and easy-to-use tunneling solution with many advanced features.
Making Configuration Easy & Secure
The DCOM protocol is difficult to configure. Even the most experienced network administrators can have problems configuring DCOM networking, especially when trying to get the Windows login permissions and security settings to match. Part of the problem is that it
is very hard to find any documentation on DCOM. Even seasoned pros, who have learned the hard way, are challenged when Windows Update resets DCOM or adds new settings that break a working system. Most integrators get around these problems by defining very broad access permissions on all machines involved. In a typical network environment, though, you do not want to configure your computers with loose access permissions. This means using DCOM can actually compromise your network security standards. Keeping your production network on a closed system has historically been one way of protecting it, but with the demands to share data across systems this is becoming less practical. Firewalls are used to protect network-to-network data, but DCOM configuration in these situations is even more difficult to get working.
Tunneling with the OPC DataHub eliminates DCOM usage between PCs and all of its configuration and security issues. The OPC DataHub uses the industry standard TCP/IP protocol to network data between an OPC server on one computer and an OPC client on another computer, thus avoiding all of the major problems associated with using the DCOM protocol.
Tunneling data using the OPC DataHub
The OPC DataHub offers this tunneling feature by effectively ‘mirroring’ data from one OPC DataHub running on the OPC server computer, to another OPC DataHub running on the OPC client computer as shown in the image above. This method results in very fast data transfer between OPC DataHub nodes. When a DCOM connection is broken, there are very long timeout delays before either side is notified of the problem, due to DCOM having hard coded timeout periods which can’t be adjusted by the user. In a production system, these long delays without warning can be a very real problem. Some OPC clients and OPC client tools have internal timeouts to overcome
this one problem but this approach does not deal with the other issues discussed in this paper.
The OPC DataHub has a user-configurable heartbeat and timeout feature which allows it to react immediately when a network break occurs. As soon as this happens, the OPC DataHub begins to monitor the network connection and when the link is re-established, the local OPC DataHub automatically reconnects to the remote OPC DataHub and refreshes the data set with the latest values. Systems with slow polling rates over long distance lines can also benefit from the user-configurable timeout, because DCOM timeouts might have been too short for these systems.
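The heartbeat/timeout behaviour described above can be sketched as a small state machine. This is illustrative only; the actual OPC DataHub implementation is not shown here, and the class `TunnelMonitor` and its methods are invented names.

```python
class TunnelMonitor:
    """Detects network breaks via a user-configurable timeout and
    flags a full data refresh when the link is re-established."""
    def __init__(self, timeout):
        self.timeout = timeout        # user-configurable, unlike DCOM
        self.last_heartbeat = 0.0
        self.connected = True
        self.needs_refresh = False

    def heartbeat(self, now):
        """Called whenever a heartbeat message arrives from the peer."""
        if not self.connected:
            # Link restored: reconnect and refresh the full data set.
            self.connected = True
            self.needs_refresh = True
        self.last_heartbeat = now

    def poll(self, now):
        """Called periodically: a break is detected as soon as the
        configured timeout expires."""
        if self.connected and now - self.last_heartbeat > self.timeout:
            self.connected = False
        return self.connected

m = TunnelMonitor(timeout=5.0)
m.heartbeat(0.0)
print(m.poll(3.0))    # -> True   (within the timeout)
print(m.poll(6.0))    # -> False  (break detected after 5 s of silence)
m.heartbeat(8.0)      # link restored
print(m.connected, m.needs_refresh)  # -> True True
```

Because the timeout is an ordinary parameter, a slow long-distance link can simply be given a larger value, which is the flexibility the hard-coded DCOM timeouts lack.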
Whenever there is a network break, it is important to protect the client systems that depend on data being delivered. Because each end of the tunneling connection is an independent OPC DataHub, the client programs are protected from network failures and can continue to run in isolation using the last known data values. This is much better than having the client applications lose all access to data when the tunneling connection goes down.
The OPC DataHub uses an asynchronous messaging system that further protects client applications from network delays. In most tunneling solutions, the synchronous nature of DCOM is preserved over the TCP link. This means that when a client accesses data through the tunnel, it must block while waiting for a response. If a network error occurs, the client
will continue to block until a network timeout occurs. The OPC Data Hub removes this limitation by releasing the client immediately and then delivering the data over the network. If a network error occurs, the data will be delivered once the network connection is re-established.
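The asynchronous, non-blocking delivery model can be sketched as a queue. This is hypothetical code, not OPC DataHub internals; the class `AsyncTunnel` is an invented name. The client's write returns immediately, and queued updates are flushed once the network is available again.

```python
from collections import deque

class AsyncTunnel:
    def __init__(self):
        self.outbox = deque()     # updates awaiting delivery
        self.network_up = True
        self.delivered = []       # what actually crossed the network

    def client_write(self, item, value):
        """Returns immediately - the client never blocks on the network."""
        self.outbox.append((item, value))
        self.flush()

    def flush(self):
        """Deliver queued updates whenever the network permits."""
        while self.network_up and self.outbox:
            self.delivered.append(self.outbox.popleft())

t = AsyncTunnel()
t.network_up = False
t.client_write("Tank.Level", 42)   # returns at once despite the outage
print(len(t.delivered))            # -> 0 (queued, not yet delivered)
t.network_up = True
t.flush()                          # delivered after reconnection
print(t.delivered)                 # -> [('Tank.Level', 42)]
```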
OPC DataHub vs. other tunneling products:

- The OPC DataHub keeps all OPC transactions local to the computer, thus fully protecting the client programs from any network irregularities. Other products expose OPC transactions to network irregularities, making client programs subject to timeouts, delays, and blocking behavior. Link monitoring can reduce these effects, while the OPC DataHub eliminates them.

- The OPC DataHub mirrors data across the network, so that both sides maintain a complete set of all the data. This shields the clients from network breaks, as it lets them continue to work with the last known values from the server; when the connection is re-established, both sides synchronize the data set. Other products pass data across the network on a point-by-point basis and maintain no knowledge of the current state of the points in the system. A network break leaves the client applications stuck with no data to work with.

- With the OPC DataHub, a single tunnel can be shared by multiple client applications. This significantly reduces network bandwidth and means the customer can reduce licensing costs, as all clients (or servers) on the same computer share a single tunnel connection. Other tunneling products require a separate network connection for each client-server connection, which increases the load on the system and on the network, and increases licensing costs.
These features make it much easier for client applications to behave in a robust manner when communications are lost, saving time and reducing frustration. Without these features, client applications can become slow to respond or completely unresponsive during connection losses or when trying to make synchronous calls.
Securing the System
Recently, DCOM networking has been shown to have serious security flaws that make it vulnerable to hackers and viruses. This is particularly worrying to companies who network data across Internet connections or other links outside the company.
To properly secure your communication channel, the OPC DataHub offers secure SSL connections over the TCP/IP network. SSL Tunneling is fully encrypted, which means the data is completely safe for transmission over open network links outside the company firewalls. In addition, the OPC DataHub provides access control and user authentication through the use of optional password protection. This ensures that only authorized users can establish tunneling connections. It is a significant advantage having these features built into the OPC DataHub, since other methods of data encryption can require complicated operating system configuration and the use of more expensive server PCs, which are not required for use with the OPC DataHub.
Advanced OPC Tunneling
While there are a few other products on the market that offer OPC tunneling capabilities to replace DCOM, the OPC DataHub is unique in that it is the only product to combine tunneling with a wide range of advanced and complementary features to provide even more added benefits.
Significant reduction in network bandwidth
The OPC DataHub reduces the amount of data being transmitted across the network in two ways:
1. Rather than using a polling cycle to transmit the data, the OPC DataHub only sends a message when a new data value is received. This significantly improves performance and reduces bandwidth requirements.
2. The OPC DataHub can aggregate both client and server connections. This means that the OPC DataHub can collect data from multiple OPC servers and send it across the network using a single connection. On the client side, any number of OPC clients can attach to the OPC DataHub and they all receive the latest data as soon as it arrives. This eliminates the need for each OPC client to connect to each OPC server using multiple connections over the network.
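The first point, report by exception, can be sketched as a change filter. This is illustrative code, not DataHub internals; the class `ChangeFilter` and the item names are invented. A message crosses the network only when an item's value actually changes, instead of on every polling cycle.

```python
class ChangeFilter:
    def __init__(self):
        self.last_sent = {}
        self.messages_sent = 0

    def on_poll(self, item, value):
        """Forward a value over the network only if it changed."""
        if self.last_sent.get(item) != value:
            self.last_sent[item] = value
            self.messages_sent += 1   # one network message
            return True
        return False                  # suppressed: no bandwidth used

f = ChangeFilter()
samples = [("Pump.Speed", 1450), ("Pump.Speed", 1450),
           ("Pump.Speed", 1450), ("Pump.Speed", 1462)]
sent = [f.on_poll(i, v) for i, v in samples]
print(sent)              # -> [True, False, False, True]
print(f.messages_sent)   # -> 2 (instead of 4 polled transmissions)
```

For slowly changing process variables the suppression ratio can be large, which is why this technique also makes slow network and Internet links practical, as discussed below.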
Combining Tunneling and Aggregation with the OPC DataHub
Non-Blocking
While it may seem simple enough to replace DCOM with TCP/IP for networking OPC data, the OPC DataHub also replaces the inherent blocking behaviour experienced in DCOM communication. Client programs connecting to the OPC DataHub are never blocked from sending new information. Some vendors of OPC tunneling solutions still face this blocking problem, even though they are using TCP/IP.
Supports slow network and Internet links
Because the OPC DataHub reduces the amount of data that needs to be transmitted over the network, it can be used over a slow network link. Any interruptions are dealt with by the OPC DataHub while the OPC client programs are effectively shielded from any disturbance caused by the slow connection.
Access to data on network computers running Linux
Another unique feature of the OPC DataHub is its ability to mirror data between OPC DataHubs running on other operating systems, such as Linux and QNX. This means you can have your own custom Linux programs act as OPC servers, providing real-time data to OPC client applications running on networked Windows computers. The reverse is also true. You can have your Linux program access data from OPC servers running on networked Windows computers.
Connecting OPC to Linux using the OPC DataHub
9.2 Load balancing between computers
The OPC DataHub also offers the unique ability to balance the load on the OPC server computers. You may have a system where multiple OPC clients are connecting to the OPC server at the same time, causing the server computer to experience high CPU loads and slower performance. The solution to this is to mirror data from the OPC DataHub on the OPC server computer to an OPC DataHub on another computer and then have some of your OPC clients connect to this second ‘mirrored’ computer. This reduces the load on the original OPC server computer and provides faster response to all OPC client computers.
Load Balancing using the OPC DataHub
9.3 Advanced Tunneling Example - TEVA Pharmaceuticals (Hungary)
TEVA Pharmaceuticals in Hungary recently used the OPC DataHub to combine tunneling and aggregation to network OPC data over the network and through the company firewall.
Laszlo Simon is the Engineering Manager for the TEVA API plant in Debrecen, Hungary. He had a project that sounded simple enough. He needed to connect new control applications through several OPC stations to an existing SCADA network. The plant was already running large YOKOGAWA DCS and GE PLC control systems, connected to a number of distributed SCADA workstations. However, Mr. Simon did face a couple of interesting challenges in this project:
The OPC servers and SCADA systems were on different computers, separated by a company firewall. This makes it extremely difficult to connect OPC over a network, because of the complexities of configuring DCOM and Windows security permissions.
Each SCADA system needed to access data from all of the new OPC server stations. This meant Mr. Simon needed a way to aggregate data from all the OPC stations into a single common data set on each SCADA computer.
Chapter 10
OPC APPLICATION CAPACITY
DA Server:
Number of clients (server objects): 100 clients
Number of groups (group objects): 1000 groups
Number of Item IDs: 100000 item IDs / all groups
Cache update period (data gathering period): 1 to 3600 sec
Max. throughput of data access: 2000 item IDs/sec

A&E Server:
Number of clients (server objects): 100 clients
Max. number of event-registered objects: 1000 objects

HDA Server:
Number of clients (server objects): 100 clients
Max. historical data save period: Not restricted (depends on the server)
Chapter 11
SYSTEM REQUIREMENTS FOR OPC
Minimum System Requirements:
2.0 GHz Processor Speed
1 GB installed RAM
180 MB available disk space
Ethernet Card
VGA (800 x 600) or Higher Resolution Video Adapter and Monitor
CD-ROM or DVD Drive
Keyboard and Microsoft Mouse or Compatible Pointing Device.
CONCLUSION
Thus we have studied in detail the hardware and software configuration of OPC (Object Linking and Embedding for Process Control). We can conclude that OPC is a very important aspect of the process control industry, as it makes process control, process monitoring and process implementation much easier. It is also a safer option, since its communication can be protected by a firewall. OPC can be considered the future of the growing process industry.
Glossary
ACL - Access Control List: List of rules specifying access privileges to network resources.
API - Application Programming Interface: The specification of the interface an application must invoke to use certain system features.
CATID - Category Identifier: Specifies the active OPC specifications.
CCM - Component Category Manager: A utility that creates categories, places components in specified categories, and retrieves information about categories.
CIFS - Common Internet File System: Updated version of SMB.
CIP - Common Industrial Protocol: An open standard for industrial network technologies. It is supported by an organization called the Open DeviceNet Vendor Association (ODVA).
COM – Component Object Model: Microsoft’s architecture for software components. It is used for inter-process and inter-application communications. It lets components built by different vendors be combined in an application.
CLSID - Class Identifier: An identifier for COM objects.
CORBA - Common Object Request Broker Architecture: An architecture that enables objects to communicate with one another regardless of the programming language and operating system being used.
CSP - Client Server Protocol: An Allen-Bradley protocol used to communicate to PLCs over TCP/IP.
DCOM – Distributed Component Object Model: An extension to the Component Object Model that Microsoft made to support communication among objects on different computers across a network.
DCS – Distributed Control System: A Distributed Control System allows for remote human monitoring and control of field devices from one or more operation centers.
DDE - Dynamic Data Exchange: An inter-process communication system built into Windows systems. DDE enables two running applications to share the same data.
DLL - Dynamic Link Libraries: A file containing executable code and data bound to a program at load time or run time, rather than during linking.
DMZ - Demilitarized Zone: A small network inserted as a "neutral zone" between a trusted private network and the outside untrusted network.
DNP3 - Distributed Network Protocol 3: A protocol used between components in process automation systems.
DNS – Domain Name System: A distributed database system for resolving human readable names to Internet Protocol addresses.
EN - Enterprise Network: A private communication network of a firm.
ERP - Enterprise Resource Planning: The set of activities a business uses to manage its key resources.
GUI - Graphical User Interface: Graphical, as opposed to textual, interface to a computer.
GUID - Globally Unique Identifier: A unique 128-bit number that is produced by the Windows operating system and applications to identify a particular component, application, file, database entry or user.
HMI - Human Machine Interface: This interface enables the interaction of man and machine.
HTML - Hypertext Markup Language: The authoring software language used on the Internet's World Wide Web.
HTTP - HyperText Transfer Protocol: The protocol used to transfer Web documents from a server to a browser.
HTTPS - HyperText Transfer Protocol over SSL: A secure protocol used to transfer Web documents from a server to a browser.
IIS - Internet Information Server: Microsoft’s web server.
IDL - Interface Definition Language: Language for describing the interface of a software component.
IDS - Intrusion Detection System: A system to detect suspicious patterns of network traffic.
IPX - Internetwork Packet Exchange: A networking protocol used by Novell Incorporated.
IPSEC – Internet Protocol Security: An Internet standard providing security at the network layer.
IP - Internet Protocol: The standard protocol used on the Internet that defines the datagram format and a best effort packet delivery service.
I/O - Input/output: An interface for the input and output of information.
ISA - The Instrumentation, Systems, and Automation Society (now the International Society of Automation): A nonprofit organization that helps automation and control professionals solve technical instrumentation problems.
IT - Information Technology: The development, installation and implementation of applications on computer systems.
LAN - Local Area Network: A computer network that covers a small area.
LM - LAN Manager: An old Microsoft Windows authentication protocol.
LDAP - Lightweight Directory Access Protocol: Protocol to access directory services.
MBSA - Microsoft Baseline Security Analyzer: A tool from Microsoft used to test a system to see if Microsoft best practices are being used.
MIB - Management Information Base: The database that a system running an SNMP agent maintains.
MODBUS - A communications protocol designed by Modicon Incorporated for use with its PLCs.
OLE – Object Linking and Embedding: A precursor to COM, allowing applications to share data and manipulate shared data.
OPC – OLE for Process Control: A standard based on OLE, COM and DCOM, for accessing process control information on Microsoft Windows systems.
OPC-A&E - OPC Alarms & Events: Standards created by the OPC Foundation for alarm monitoring and acknowledgement.
OPC-DA - OPC Data Access: Standards created by the OPC Foundation for accessing real-time data from data acquisition devices such as PLCs.
OPC-DX - OPC Data Exchange: Standards created by the OPC Foundation to allow OPC-DA servers to exchange data without using an OPC client.
OPC-HDA - OPC Historical Data Access: Standards created by the OPC Foundation for communicating data from devices and applications that provide historical data.
OPC-UA - OPC Unified Architecture: A standard being created by the OPC Foundation to tie together all existing OPC technology using the .NET Architecture.
OPC XML-DA - OPC XML Data Access: Standards created by the OPC Foundation for accessing real time data, carried in XML messages, from data acquisition devices such as PLCs.
PLC – Programmable Logic Controller: A PLC is a small dedicated computer used for controlling industrial machinery and processes.
PCN - Process Control Network: A communications network used to transmit instructions and data to control devices and other industrial equipment.
PROGID - Program Identifier: A string that identifies the manufacturer of an OPC server and the name of the server.
RPC – Remote Procedure Call: A standard for invoking code residing on another computer across a network.
RSLinx - Software providing plant floor device connectivity for a wide variety of applications.
SCADA – Supervisory Control And Data Acquisition: A system for industrial control consisting of multiple Remote Terminal Units (RTUs), a communications infrastructure, and one or more Control Computers.
SID – Security Identifier: A unique name that is used to identify a Microsoft Windows object.
SP - Service Pack: A bundle of software updates.
SPX - Sequenced Packet Exchange: A transport-layer protocol used by Novell Incorporated.
SMB - Server Message Block: A Microsoft network application-level protocol used between nodes on a LAN.
SNMP - Simple Network Management Protocol: A protocol used to manage devices such as routers, switches and hosts.
SOAP - Simple Object Access Protocol: A protocol for exchanging XML-based messages using HTTP.
SSL - Secure Socket Layer: A de facto standard for secure communications created by Netscape Incorporated.
TCP - Transmission Control Protocol: The standard transport-level protocol that provides a reliable stream service.
UDP - User Datagram Protocol: A connectionless network transport protocol.
URL - Uniform Resource Locator: The address of a resource on the Internet.
WS-Security - Web Services Security: A communications protocol providing a means for applying security to Web Services.
XML - Extensible Markup Language: A general-purpose markup language for creating special-purpose markup languages that are capable of describing many different kinds of data.
PROJECT REPORT
HEART RATE MONITORING SYSTEM USING MICROCONTROLLER
PREPARED BY: ABHIMANYU AMBEGAONKAR(07-ICG-60)
SUMANT SHARMA (07-ICG-61)
INTERNAL GUIDE: MR. DIPESH SHAH
PREFACE
Heart rate monitors are devices that allow the user to gain a real-time measurement of their heart beat. They consist of a transmitter in the form of a strap and a receiver, usually worn on the wrist or fingers. The strap transmitter measures the number of times the user’s heart beats per minute by monitoring voltages across the heart through the strap, which is in contact with the skin. As each heartbeat is detected, the transmitter sends a radio signal to the receiver, which is used to determine the rate at which the heart beats.
Heart rate monitors should not be confused with the clinical device used by medical professionals. Personal heart rate monitoring devices are more convenient, less bulky, and lightweight, allowing for outdoor usage.
Many individuals use heart rate monitors to determine the efficiency of their training. During the mid-1970s, the first concept about the heart rate monitor was conceived by a Finnish professor Seppo Säynäjäkangas, who thought of a way to accurately record heart rates of the Finnish National Cross Country Ski team during training. By the year 1977, professor Säynäjäkangas worked on the idea and developed the gadget giving birth to Polar Electro that became a leading brand in heart monitoring equipment. Over the years, a number of companies began manufacturing heart rate monitors, evolving the simple device that detects heartbeats and incorporating features such as calorie expenditure and fitness exercise diary. Some models detect breathing rate and vital signs related to an individual’s cardiovascular fitness.
TABLE OF CONTENTS
1. Introduction…………………………………………………………………66
 1.1 Introduction……………………………………………………………66
2. Block diagram and its description…………………………………………67
 2.1 Block diagram…………………………………………………………67
 2.2 Description of block diagram…………………………………………68
 2.3 LED and microcontroller block diagram……………………………..69
3. Schematic diagram…………………………………………………………70
 3.1 Schematic diagram……………………………………………………70
4. Hardware design……………………………………………………………71
 4.1 Circuit design…………………………………………………………71
 4.2 Working of the system………………………………………………..72
 4.3 Packaging of the system………………………………………………73
 4.4 Concept of the heart rate monitor system…………………………….75
 4.5 Main components of the system………………………………………76
 4.6 Types of detection systems……………………………………………80
5. Software design……………………………………………………………81
 5.1 Flowchart……………………………………………………………..81
 5.2 Algorithm……………………………………………………………..82
6. Testing and its result………………………………………………………83
 6.1 Testing………………………………………………………………..83
 6.2 Result…………………………………………………………………83
7. Future expansion…………………………………………………………..84
 7.1 Strength training guidance……………………………………………84
 7.2 Stay in touch with your health and local environment……………….84
 7.3 PM4 most advanced monitor…………………………………………85
 7.4 Extension of heart rate monitor………………………………………86
 7.5 Smartphone expansion………………………………………………..87
8. Applications, advantages and disadvantages………………………………88
 8.1 Applications…………………………………………………………..88
 8.2 Advantages……………………………………………………………89
 8.3 Disadvantages…………………………………………………………90
 8.4 Challenges…………………………………………………………….91
9. Conclusion…………………………………………………………………92
10. Bibliography……………………………………………………………...93
11. Appendix………………………………………………………………….94
1.1 INTRODUCTION
Heart rate measurement is one of the very important parameters of the human cardiovascular system. The heart rate of a healthy adult at rest is around 72 beats per minute (bpm). Babies have a much higher heart rate, at around 120 bpm, while older children have heart rates of around 90 bpm.
A heart rate monitor is a personal monitoring device that allows a subject to measure their heart rate in real time.
Hence, to analyze the heart rate properly, we are developing a software program to control the PIC (Peripheral Interface Controller) 16F84. The PIC16F84 belongs to the mid-range family of PICmicro microcontroller devices.
When the subject’s finger is placed in the slot, the infrared LEDs illuminate the subject and the output is sent to the sensor. The analog signal from the sensor is passed through several amplifier stages running from a 9 V supply, and the amplified analog output is obtained. This output is then fed to the PIC, which has a 5 V supply. A crystal oscillator provides the PIC with its 4 MHz operating frequency. The resulting digital output, the heart rate, is shown on the display.
Thus we have designed a system to analyze the heart rate accurately.
2.1 BLOCK DIAGRAM
The block diagram, reconstructed from its labels, forms the following chain: input light source/sensor → 1st, 2nd and 3rd stage amplifiers with a buffer → PIC controller (4 MHz crystal oscillator) → display (digital output), with a 9 V / 5 V power supply feeding the amplifier stages and the PIC.
2.2 DESCRIPTION OF BLOCK DIAGRAM
BLOCK 1: Block 1 is the input of the system, the sensor BPW50. It comprises a detector and an LED as the input source. When the subject’s finger is placed in the slot, the infrared LEDs detect the subject and the output is sent to the sensor.
BLOCK 2: Block 2 consists of a series of amplifying stages: a 1st stage amplifier, 2nd stage amplifier, 3rd stage amplifier and a buffer. ICs TL074 and TL076 are used to implement these stages. The analog signal from the sensor is passed through these amplifying and filtering stages, which run from a 9 V supply, and the amplified analog output is obtained.
BLOCK 3: Block 3 is the combined 5 V and 9 V power supply, which feeds the various amplifying stages and the PIC, as well as the display and input blocks.
BLOCK 4: The amplified output is fed to the PIC, which has a 5 V supply. A crystal oscillator provides the PIC with its 4 MHz operating frequency. The program stored in the PIC controller then determines the output according to the input conditions.
BLOCK 5: The output from the PIC is sent to the display in order to show the heart beat rate accurately. The digital output, the resulting heart rate, obtained from the PIC is then shown with the help of the display.
2.3 LED AND MICROCONTROLLER BLOCK DIAGRAM SYSTEM:
3.1 SCHEMATIC DIAGRAM
4.1 CIRCUIT DESIGN
4.2 WORKING OF THE SYSTEM
Basically, the device consists of an infrared transmitter LED and an infrared sensor photo-diode. The transmitter-sensor pair is clipped on one of the fingers of the subject as shown in the figure.
The LED emits infrared light onto the finger of the subject. The photodiode detects this light beam and measures the change of blood volume through the finger artery. This signal, which is in the form of pulses, is then amplified and filtered suitably and fed to a low-cost microcontroller for analysis and display.
The microcontroller counts the number of pulses over a fixed time interval and thus obtains the heart rate of the subject.
Several such readings are obtained over a known period of time and the results are averaged to give a more accurate reading of the heart rate.
The calculated heart rate is displayed on an LCD in beats-per-minute in the following format:
o Rate = nnn bpm
where nnn is an integer between 1 and 999.
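The counting, averaging, and display steps described above can be sketched as follows. The 15 s window matches the text; the pulse counts are invented example values.

```python
# Sketch of the counting-and-averaging scheme described in the text.
# The 15 s window is from the report; the pulse counts are made-up examples.

def heart_rate_bpm(pulse_count, window_seconds):
    """Convert pulses counted over a fixed window into beats per minute."""
    return pulse_count * 60 / window_seconds

# Several readings over a known period are averaged for a more accurate result.
readings = [heart_rate_bpm(n, 15) for n in (18, 19, 18)]   # pulses per 15 s
average = sum(readings) / len(readings)

# Display in the format given in the text: "Rate = nnn bpm"
print(f"Rate = {round(average):03d} bpm")   # Rate = 073 bpm
```

The zero-padded field mirrors the "nnn" placeholder in the display format, so rates below 100 bpm still occupy three digits.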
4.3 PACKAGING OF THE SYSTEM
4.4 CONCEPT OF THE HEART RATE MONITOR SYSTEM:
The heart is one of the most vital organs within the human body. It acts as a pump that circulates oxygen and nutrient carrying blood around the body in order to keep it functioning.
The circulating blood also carries waste products from the body to the kidneys for removal. When the body is exerted, the rate at which the heart beats varies in proportion to the amount of effort being exerted.
By detecting the voltage created by the beating of the heart, its rate can be easily observed and used for a number of health purposes.
4.5 MAIN COMPONENTS OF THE SYSTEM
4.5.1 Sensor-BPW50 :
The BPW50 is a high-speed, high-sensitivity PIN photodiode in a flat side-view plastic package. Thanks to its water-clear epoxy, it is sensitive to visible and infrared radiation. The large active area combined with the flat case gives high sensitivity over a wide viewing angle.
It incorporates a daylight filter, which makes it sensitive to infrared radiation only by rejecting radiation with wavelengths below 700 nm. The device has a low junction capacitance and thus a fast switching speed.
4.5.2 Programmable Interface Controller-16F84A
The PIC16F84A belongs to the mid-range family of PICmicro microcontroller devices. The PIC uses the Harvard architecture, which has separate program and data memories with separate buses.
The program memory contains 1K words, which translates to 1024 instructions, since each 14-bit program memory word is the same width as each device instruction. The data memory (RAM) contains 68 bytes. Data EEPROM is 64 bytes.
There are also 13 I/O pins that are user-configured on a pin-by-pin basis. Some pins are multiplexed with other device functions.
These functions include:
o External interrupt
o Change on PORTB interrupts
o Timer0 clock input
The voltage range is from 2.0 V to 5.5 V.
4.5.3 IC TL074:
The TL074 is a high-speed JFET-input quad operational amplifier incorporating well-matched, high-voltage JFET and bipolar transistors in a monolithic integrated circuit.
The device features high slew rates, low input bias and offset currents, and a low offset voltage temperature coefficient.
4.5.4 Displays
0.52 inch digit height
Continuous uniform segments
Low power requirements
Excellent character appearance
Wide viewing angle
High contrast and high brightness
This device utilizes bright red LED chips, which are made from a transparent GaP substrate, and has a gray face and white segments.
4.5.5 Regulators
o The 78XX series of three-terminal positive voltage regulators employ built-in current limiting, thermal shutdown, and safe-operating area protection which makes them virtually immune to damage from output overloads.
o With adequate heat sinking, they can deliver in excess of 0.5A output current. Typical applications would include local (on-card) regulators which can eliminate the noise and degraded performance associated with single-point regulation.
4.6 TYPES OF DETECTION SYSTEM
There are three main types of pulse detection system, depending on where they are worn:
1. Finger probe
2. Ear probe
3. Wrist probe
(Figures: finger detection probe with the photodiode on top and 2 LEDs; ear detection probe; wrist detection probe.)
5.1 Flowchart of the program
START
Count the pulses in 15 s and multiply by 4.
If set mode is on, check whether the count is greater than the specified limit; if yes (Y), the buzzer is turned on; if no (N), continue.
Display the heart beats on the LED display.
END
5.2 Algorithm of the program
The algorithm can be as follows:
1. Start.
2. Set up the internal timer.
3. Enable the internal interrupt int_RTCC (Timer 0).
4. Enable external and global interrupts.
5. Initialize the timer to start counting from the value 0.
6. Set the timer for a time interval of 66 ms.
7. Repeat the step 225 times (about 15 seconds).
8. Count the number of external pulses in the external interrupt subroutine.
9. Multiply the value by 4 to get the pulse rate.
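The timing arithmetic in steps 6 to 9 can be checked with a short desktop simulation; the pulse count below is a hypothetical example value, not a measured result.

```python
# Desktop check of the timing arithmetic in steps 6-9 of the algorithm.
# The pulse count is a hypothetical example value.

TICK_MS = 66       # timer interval from step 6
TICKS = 225        # repetitions from step 7

window_s = TICK_MS * TICKS / 1000.0    # measurement window in seconds
print(window_s)                        # 14.85, i.e. roughly 15 seconds

pulses = 18                            # counted by the external interrupt routine
bpm = pulses * 4                       # step 9: scale the ~15 s count to one minute
print(bpm)                             # 72
```

Multiplying by 4 treats the 14.85 s window as a quarter of a minute, which introduces a small systematic error (about 1%) that the report's averaging of several readings helps to smooth out.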
6.1 TESTING
After implementing the circuit on the PCB layout, the input section worked flawlessly. The input section comprises the sensor, the amplifying stages and a buffer.
FUTURE EXPANSION
7.1 Strength Training Guidance
For fitness enthusiasts who want to improve strength and cardio, such a monitor:
o Guides your strength training with heart-rate-based recovery periods between sets
o Creates a training program based on your personal goals and sets new weekly training targets
o Measures your aerobic fitness at rest with the Polar Fitness test and tells you your progress
o Comes with the Polar Flow Link for effortless data transfer to your online training diary at polarpersonaltrainer.com via both Mac and PC
7.2 Stay in touch with your health and local environment
To help make you more aware of your health and local environmental conditions, the Nokia Eco Sensor Concept will include a separate, wearable sensing device with detectors that collect environment, health, and/or weather data.
You will be able to choose which sensors you would like to have inside the sensing device, thereby customizing the device to your needs and desires. For example, you could use the device as a “personal trainer” if you were to choose a heart-rate monitor and motion detector (for measuring your walking pace).
7.3 The PM4 is Concept2's most advanced performance monitor
It includes all of the features below:
Five display options: All data, force curve, rowing with a pace boat, bar chart and large print.
Row with an animated rower to learn technique
Review past workout results
Choose from multiple language options
Heart Rate Monitoring: Built in wireless compatibility with Suunto heart rate technology (chest belt provided), offers improved transmission and eliminates interference from nearby rowers. Also compatible with Polar heart rate technology, if optional Polar receiver is installed.
Log Card: Removable Log Card stores workout data and personal preferences. One Log Card is included with each PM4.
USB Interface: Easily transfer data to your computer.
7.4 Extension of heart rate monitor as pulse oximeter and its future expansion
7.5 Smart phone expansion
The iPhone is unlike any other smartphone. Not only does it let you make phone calls and have a lot of fun with the apps available on iTunes, you can also use it to become healthier and even lose weight. There are lots of cool applications to choose from in the health and fitness categories. Take heart rate monitors: with these apps you can track not only your heart rate but also how many calories you are burning on an everyday basis. A great way to get on the right track and stay on it.
Many such heart rate monitor apps are available for the iPhone.
8.1 Applications
In addition to heart rate (HR) responses to exercise, research has recently focused more on heart rate variability (HRV). Increased HRV has been associated with a lower mortality rate and is affected by both age and sex. During graded exercise, the majority of studies show that HRV decreases progressively up to moderate intensities, after which it stabilizes. The duration of the training programmes might be one of the factors responsible for the variability of the results.
HRMs are mainly used to determine the exercise intensity of a training session or race. Compared with other indications of exercise intensity, HR is easy to monitor, is relatively cheap and can be used in most situations.
A new patent application by Apple describes an “integrated sensor for detecting a user’s cardiac activity” embedded into an electronic device — presumably an iPhone, iPad or iPod touch. The sensor, as Apple describes it, could be completely hidden from view, and the “electrical signals generated by the user can be transmitted from the user’s skin through the electronic device housing to the leads.”
With a circuit extension, the heart rate monitor becomes a pulse oximeter, a medical device that indirectly monitors the oxygen saturation of a patient's blood (as opposed to measuring oxygen saturation directly through a blood sample) and changes in blood volume in the skin, producing a photoplethysmograph.
8.2 Advantages
o Heart rate monitors and heart rate controls have become a very popular option on elliptical trainers as well as treadmills.
o To get an optimum workout it is important to pace your exercise. You want your heart rate at the proper intensity level for an extended period of time. If your heart rate gets too high your activity can become counterproductive; if it is too low you are not getting any substantial health benefits.
o A doctor can monitor a patient from remote places.
o Continuous monitoring is possible.
o In case of an abnormal condition, quick action can be taken.
o HR is easy to monitor, is relatively cheap and can be used in most situations.
o In addition, HR and HRV could potentially play a role in the prevention and detection of overtraining.
8.3 Disadvantages
The finger probe is a problem for infants, because it cannot be clipped onto a baby's soft finger; hence the device can only be used for adults.
A finger-sensor heart rate monitor is less reliable than an ECG (electrocardiogram).
Things to keep in mind before using it:
- The pulse monitor depends on the thickness of the patient's finger.
- The patient should have no circulation problems in the hands and arms.
- The patient should feel warm, because if he is cold, the capillaries in his fingers will contract.
One of the major problems with the transmission type is "low perfusion", which arises in post-traumatic situations (after surgery or other trauma) when the peripheral blood circulation is greatly reduced. To address this problem, a reflective-type pulse meter has to be placed on an artery close to the heart.
There are several common formulas for estimating maximum heart rate, one being 220 minus your age. A monitor that you buy might use that or some other formula.
But for any individual it is quite likely wrong, by as much as plus or minus 12 beats per minute. That means the zones set up by a monitor might push you too hard or not hard enough. It is therefore important to take it easy at first, and see whether the zones set by the monitor square with how you feel.
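The arithmetic behind that warning is easy to make concrete. In the sketch below, the age is an example value, and the 50%-85% training-zone bounds are a common rule of thumb rather than something taken from this report.

```python
# Concrete numbers for the "220 minus age" estimate and its +/-12 bpm error.
# The age is an example; the 50%-85% zone bounds are a common convention,
# not taken from this report.

def max_hr_estimate(age):
    return 220 - age

age = 30                               # example subject
est = max_hr_estimate(age)             # 190 bpm estimated maximum
low, high = est - 12, est + 12         # the true maximum may lie anywhere in 178-202 bpm

zone = (0.50 * est, 0.85 * est)        # a training zone derived from the estimate
print(est, (low, high), zone)
```

Since the zone bounds are percentages of an estimate that may itself be off by 12 bpm, the zone edges inherit an error of roughly 6 to 10 bpm, which is why the text advises checking the zones against how you actually feel.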
8.4 CHALLENGES
The main challenges include amplifying the desired weak signal in the presence of noise from other muscles and electrical sources. Noise from the environment will easily swamp the tiny pulse signal from the heart. The leads connecting the electrode to the amplifier act like an antenna and inadvertently receive unwanted radiated signals: for example, the 50 Hz hum from power lines and EMFs from fluorescent lights add a tiny sinusoidal wave which is generally quite difficult to filter away. Muscles other than the heart also produce voltage potentials, and these can also be detected, although the large relative size and regularity of the heart muscle help to differentiate it from the rest.
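As a software-side illustration of the 50 Hz problem, the sketch below averages samples over exactly one mains period, which cancels the 50 Hz sinusoid while barely touching the slowly varying pulse baseline. The sampling rate and amplitudes are invented for the example; a real design would also filter in the analog hardware.

```python
import math

# Synthetic demonstration: averaging over exactly one 50 Hz mains period
# cancels the hum. Sampling rate and amplitudes are invented for the example.

FS = 1000                         # samples per second (assumed)
PERIOD = FS // 50                 # 20 samples = one full 50 Hz cycle

n = 2000
# Slow "pulse" baseline (1.2 Hz) plus 50 Hz interference of amplitude 0.3.
baseline = [1.0 + 0.5 * math.sin(2 * math.pi * 1.2 * i / FS) for i in range(n)]
signal = [b + 0.3 * math.sin(2 * math.pi * 50 * i / FS)
          for i, b in enumerate(baseline)]

# Moving average over one mains period: the 50 Hz samples sum to zero.
filtered = [sum(signal[i:i + PERIOD]) / PERIOD for i in range(n - PERIOD)]

# Residual error versus the clean baseline is far below the 0.3 hum amplitude.
err = max(abs(f - baseline[i + PERIOD // 2]) for i, f in enumerate(filtered))
print(err < 0.05)   # True
```

The cancellation works because the 20 samples of a 50 Hz cycle at 1 kHz are equally spaced phases that sum to zero; the same trick attenuates 60 Hz mains if the window is changed accordingly.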
9. CONCLUSION
A heart rate monitor is a personal monitoring device that allows a subject to measure their heart rate in real time. We have designed a system to analyze the heart rate accurately. This implementation of a heart monitor uses low-cost amplifier and filter components coupled with a sophisticated microcontroller and LCD screen. Because the device is most useful if it is portable, it was designed to run from one or two 12 V batteries. The amplifier and filter stages of the implementation were successful, and the heart rate monitor successfully detected the input. In doing this, the output voltage was found to depend strongly on the quality of contact between the detector and the skin and was observed to be highly variable. The variability of the voltage output made this approach unfeasible; using a fixed signal made demonstration of this part of the circuit possible. This project successfully implemented the digital heart rate counter. The weak heart rate signal was amplified in the presence of noise from other muscles and electrical sources, but we were unable to create an integrated device which could take this signal and calculate the heart rate accurately.
10. BIBLIOGRAPHY
http://www.avrfreaks.net
http://en.wikipedia.org/
http://www.microchip.com
http://www.alldatasheets.com/
http://www.ccsinfo.com
http://www.google.co.in/images
http://www.mendeley.com
http://www.ellipticaltrainers.com
http://www.livestrong.com
http://www.concept2.com
http://www.nokia.com/environment
http://www.siliconchip.com.au/
http://www.datasheetarchive.com/
http://en.wikipedia.org/wiki/Pulse_oximeter
11. APPENDIX
High Performance RISC CPU Features:
Only 35 single-word instructions to learn
All instructions single-cycle except for program branches, which are two-cycle
Operating speed: DC - 20 MHz clock input; DC - 200 ns instruction cycle
1024 words of program memory
68 bytes of Data RAM
64 bytes of Data EEPROM
14-bit wide instruction words
8-bit wide data bytes
15 Special Function Hardware registers
Eight-level deep hardware stack
Direct, indirect and relative addressing modes
Four interrupt sources:
External RB0/INT pin
TMR0 timer overflow
PORTB<7:4> interrupt-on-change
Data EEPROM write complete
Peripheral Features:
13 I/O pins with individual direction control
High current sink/source for direct LED drive
25 mA sink max. per pin
25 mA source max. per pin
TMR0: 8-bit timer/counter with 8-bit programmable prescaler
Special Microcontroller Features:
Enhanced Flash program memory: 10,000 erase/write cycles typical
EEPROM data memory: 10,000,000 erase/write cycles typical
EEPROM Data Retention > 40 years
In-Circuit Serial Programming via two pins
Power-on Reset (POR), Power-up Timer (PWRT), Oscillator
Start-up Timer (OST)
Watchdog Timer (WDT) with its own On-Chip RC Oscillator for
reliable operation
Code protection
Power saving SLEEP mode
Selectable oscillator options
Fig.10.4 PIC16F84A.
DC CHARACTERISTICS
CLKOUT AND I/O TIMINGS
RESET, WATCHDOG TIMER, OSCILLATOR START UP TIMER AND POWER UP TIMER TIMING