
Service Technology Magazine
Issue LXXXVI • September/October 2014
www.servicetechmag.com
$9.99 USD / $9.69 CAD / €6.99

API Governance and Management by Longji Tang, Mark Little

Security and Identity Management Applied to SOA - Part II by Jose Luiz Berg

A Look at Service-Driven Industry Models by Thomas Erl, Clive Gee, Jürgen Kress, Berthold Maier, Hajo Normann, Pethuru Raj Chelliah, Leo Shuster, Bernd Trops, Clemens Utschig-Utschig, Philip Wik, Torsten Winterberg

Copyright © Arcitura Education Inc.
Issue LXXXVI • September/October 2014

Contents

PUBLISHER: Arcitura Education Inc.
EDITOR: Thomas Erl
COPY EDITOR: Natalie Gitt
SUPERVISING PRODUCTION MANAGER: Ivana Lee
COVER DESIGN: Jasper Paladino
WEB DESIGN: Jasper Paladino
CONTRIBUTORS: Jose Luiz Berg, Thomas Erl, Clive Gee, Jürgen Kress, Mark Little, Berthold Maier, Hajo Normann, Pethuru Raj, Leo Shuster, Longji Tang, Bernd Trops, Clemens Utschig-Utschig, Philip Wik, Torsten Winterberg

From the Editor

API Governance and Management by Longji Tang, Mark Little

Security and Identity Management Applied to SOA - Part II by Jose Luiz Berg

A Look at Service-Driven Industry Models by Thomas Erl, Clive Gee, Jürgen Kress, Berthold Maier, Hajo Normann, Pethuru Raj, Leo Shuster, Bernd Trops, Clemens Utschig-Utschig, Philip Wik, Torsten Winterberg

Contributors


From the Editor

Big Data technology and practices are becoming increasingly relevant to IT enterprises. Many are discovering the extent to which traditional data analysis and data science techniques have formed the foundation for what Big Data has become as a professional field of practice. But what consistently distinguishes Big Data is the order of magnitude at which those established techniques now need to be applied, and the sometimes extreme conditions under which massive volumes of data need to be processed. These and other necessities brought about by Big Data processing demands have led to further layers of innovation, in both practice and technology, built upon traditional data science foundations.

Thomas Erl


www.arcitura.com/workshops

Q4 2014

Certified Big Data Scientist – October 20-24, 2014 – London, UK
Certified Big Data Science Professional – December 3-5, 2014 – Santa Clara, CA, United States
Certified SOA Architect – November 3-7, 2014 – Toronto, ON, Canada
Certified Cloud Technology Professional – October 27-29, 2014 – Lagos, Nigeria
Certified SOA Architect – December 8-12, 2014 – Melbourne, VIC, Australia
Certified Cloud Virtualization Specialist – November 10-12, 2014 – Santa Clara, CA, United States
Certified Cloud Professional – October 28-29, 2014 – Petaling Jaya, Malaysia
Certified Cloud Architect – December 15-19, 2014 – Las Vegas, NV, United States
Certified Big Data Science Professional – November 17-19, 2014 – Las Vegas, NV, United States

Workshop Calendar

Cloud Architect Certification – October 6-10, 2014 – London, UK
SOA Security Specialist Certification – October 13-17, 2014 – Brasília, Brazil
Big Data Science Professional Certification – October 16, 23, 30, November 6, 13, 20 – Hong Kong
Cloud Virtualization Specialist Certification – October 20-22, 2014 – Rio de Janeiro, Brazil
Cloud Architect Certification – October 20-24, 2014 – Sydney, NSW, Australia
Big Data Scientist Certification – October 20-24, 2014 – London, UK
Cloud Technology Professional Certification – October 27-29, 2014 – Lagos, Nigeria
Cloud Architect Certification – October 27-31, 2014 – Dallas, TX, United States
Cloud Professional Certification – October 28-29, 2014 – Petaling Jaya, Malaysia
Big Data Scientist Certification – November 3-5, 2014 – Virtual (PST)
SOA Architect Certification – November 3-7, 2014 – Toronto, ON, Canada
Cloud Virtualization Specialist Certification – November 10-12, 2014 – Santa Clara, CA, United States
SOA Architect Certification – November 10-14, 2014 – Munich, Germany
Cloud Architect Certification – November 10-14, 2014 – Melbourne, VIC, Australia
SOA Architect Certification – November 10-14, 2014 – Bangalore, India
Big Data Science Professional Certification – November 17-19, 2014 – Las Vegas, NV, United States
Cloud Technology Professional Certification – November 17-19, 2014 – Fairfax, VA, United States
Big Data Science Professional Certification – November 17-19, 2014 – Dallas, TX, United States
SOA Consultant Certification – November 17-21, 2014 – Virtual (PST)
SOA Consultant Certification – November 17-21, 2014 – Bangkok, Thailand
Cloud Technology Professional Certification – November 24-26, 2014 – Sydney, NSW, Australia
Cloud Technology Professional Certification – November 24-26, 2014 – Chennai, India
Cloud Architect Certification – November 24-28, 2014 – Naarden, Netherlands
Cloud Technology Professional Certification – December 1-3, 2014 – Naarden, Netherlands
Cloud Architect Certification – December 1-5, 2014 – Virtual (PST)
Big Data Scientist Certification – December 1-5, 2014 – Las Vegas, NV, United States
SOA Architect Certification – December 1-5, 2014 – Las Vegas, NV, United States
Cloud Architect Certification – December 1-5, 2014 – Dallas, TX, United States
SOA Architect Certification – December 7-11, 2014 – Dubai, UAE
Cloud Professional Certification – December 8-9, 2014 – Petaling Jaya, Malaysia
Cloud Technology Professional Certification – December 8-10, 2014 – Lagos, Nigeria
Big Data Science Professional Certification – December 8-10, 2014 – Bangalore, India
Big Data Consultant Certification – December 8-12, 2014 – Virtual (PST)
SOA Architect Certification – December 8-12, 2014 – Melbourne, VIC, Australia
SOA Architect Certification – December 14-18, 2014 – Riyadh, Saudi Arabia
SOA Architect Certification – December 15-19, 2014 – Virtual (PST)
Cloud Architect Certification – December 15-19, 2014 – Las Vegas, NV, United States
Cloud Storage Specialist Certification – January 5-7, 2015 – Fairfax, VA, United States
Cloud Architect Certification – January 12-16, 2015 – Toronto, ON, Canada


API Governance and Management by Longji Tang, Professor, Hunan University, and Mark Little, Red Hat, UK, and Computing Science, Newcastle University, UK

Abstract: We live in an era of service computing, with cloud computing platforms, social computing, and mobile computing. One of the most significant characteristics of this era is that any device connects to any service, and any service connects to any data, in a cost-effective way. The connection between device and service, as well as between service and data, is built with modern Web APIs. The shift is not only about using software in a particular business, but also about engaging other businesses and people – internal developers, partners, customers, and the world at large – by exposing software interfaces as APIs. The trend is creating a new business reality: the API Economy. It is leading an evolution of the traditional SOA paradigm toward cloud-enabled, social-enabled, and mobile-enabled modern lightweight SOA, with increasing automation of processes, transactions, and distribution across many industry sectors and organizations. This paper describes the API Economy and the emergence of API management, its building blocks, and its role in service infrastructure. API-centric architecture patterns, a reference architecture, and deployment topologies can be found in the forthcoming book Service Infrastructure.

Emergence of API Management

The Application Programming Interface (API) is an old technology that has been around for decades. The rise of Web APIs, which include the now-dominant REST APIs, traditional SOAP-based APIs, and others, has led API technology to be used for building mash-up applications, delivering data and services to mobile applications, and connecting enterprises to their partners and to cloud services. APIs have started a new life in the modern elastic, social, and mobile world. As modern Web APIs grow dramatically, become highly available through the Internet, carry increasing business value, and become ever more important to the application landscape of enterprises, API quality (security, performance, availability, and so on) and the risk of exposing data and services through open APIs become main concerns for enterprises. Thus, API management is becoming a core component of modern service infrastructure. In this section, the rise, development, and importance of API management are described and discussed. Although API management is a newly defined term, we will see that API management is an extension of SOA management that provides new technologies and architectural principles, such as the developer portal, key management, and metering as well as billing facilities, which SOA management does not cover. API management is shaping multi-channel and multi-tenant strategies across organizational boundaries.

API Economy

APIs have been around in hardware and software computing infrastructure for several decades. They have been used as important components of software systems for specifying how software components or systems should interact with each other, as with the Microsoft Windows API or the Java Enterprise Edition APIs. However, modern Web APIs are creating business miracles and changing the IT landscape. Figure 1 shows a history of various popular APIs. The modern Web API did not grow out of standards bodies, as SOAP APIs did, but was driven by cloud, mobile, and social computing innovators building on the HTTP standard. Modern APIs started around 2000, when salesforce.com officially launched its web-based, enterprise-class, API-enabled sales force


automation, what is today called SaaS; API adoption rose dramatically from 2008 and continues to grow.

Figure 1 – Modern API Milestone

APIs continue to grow, with industry broadly adopting REST APIs. The API Economy has formed around both API technology advantages and business innovation opportunities. The API technology advantages include:

■ REST API simplicity for building ecosystems.

■ Easy integration for integrating apps, specifically, mobile apps with services – cloud services and enterprise business services.

■ Wider reach allowing anyone to create a new app, such as a website or a widget which can distribute services and information to new audiences and in specific contexts that can be customized to provide tailored user experiences through APIs.

■ Exposing information and services for leveraging your investment in SOA assets.

■ Providing API access allows content to be created once and automatically published or made available through many channels. Your agency’s content is ready for easy sharing and redistribution to deliver your mission directly to more citizens.

We see many success stories in cloud computing (such as Salesforce for SaaS, Google for PaaS, and Amazon for IaaS), social computing (such as Facebook and Twitter), mobile computing (such as Amazon and Foursquare), and traditional eCommerce. Expedia generates more than $4 billion of revenue a year through its API-powered affiliate network. PayPal processed over $14 billion in payment transactions in 2012, and reached $27 billion in 2013, via its API-enabled business network. Figure 2 depicts both API growth and the booming API Economy. ProgrammableWeb listed 8,826 public APIs on March 24, 2013 (see Figure 2), and one report projects that the number of public APIs will reach 30,000 by 2016.

These numbers not only indicate that APIs are growing quickly and the API Economy is booming, but also reflect the importance of APIs and their management. In fact, the API is becoming the heart of the mobile app strategy: exposing APIs has gained traction as organizations realize that leveraging their data and services across boundaries creates more innovation that drives value to all stakeholders. The API gateway is becoming a core


component in mobile computing architecture, and API management is becoming a new front tier for enterprise SOA.

Figure 2 – API Growth and API Economy Booming

Definition of API Economy: The API Economy is the economy where companies expose their (internal) business assets or services in the form of (Web) APIs to parties with the goal of unlocking additional business value through the creation of new asset classes. (Cutter Consortium, 2013)

The above definition is based on an "economy" perspective. This paper also defines the API Economy from a value-added architectural-style perspective:

Definition of API Economy (from a technical perspective): The API Economy can be defined as a software architectural style that combines modern Web API capabilities with an API business model. It has two main principles concerning information resources and services:

■ Build a value-added ecosystem by exposing information resources, infrastructure resources, and platform resources through Web-based APIs.

■ Create new value-added resources via hybrid-style APIs that combine different types of APIs – public APIs (open APIs), partner APIs (open to partners), and private APIs (internal APIs).

The API Economy is changing not only the way companies do business, but also the way they build their


service infrastructure and connect their services to customers. The API Economy is emerging in both the IT world and the business world. The traditional way of exposing a company's information resources or services (1993-2000), mainly through web applications, is giving way to new API-enabled channels, which include the web, mobile devices, Internet TV, connected applications and services, connected machines (such as cars), and partners' applications and services.

Compared with traditional enterprises, API-enabled enterprises are agile and open and have the following characteristics:

1. Adopting flexible and simple APIs as major channels for their business

2. Enabling business transactions to be driven anywhere and anytime through an API layer in the service infrastructure

3. Providing web, mobile, and other client interfaces as a layer on top of APIs

4. Allowing customers to integrate with core service infrastructure directly through well-defined APIs, such as Amazon Elastic Compute Cloud and HP/IBM OpenStack APIs.

In the next section, we will show how the API Economy impacts companies' service infrastructure and becomes the driver of API management.

Driving Forces of API Management

In the last section, we described the API Economy, its history, concept, and the characteristics of API-enabled enterprises. The driving forces behind the API Economy include:

■ Business Consumers – they expect to access data and content anywhere and anytime across multiple devices and channels.

■ Business Companies – they are service providers that want to re-invent interactions with customers, suppliers, and partners in cost-effective or ecosystem-oriented ways. They expect to speed business and IT innovation and to increase scale across organizational boundaries.

■ Service Computing – based on SOA principles. All APIs are services, which connect to information, infrastructure, and platform resources and to existing services built in the SOA architectural style.

■ Cloud Computing – allows enterprises to share their resources and services across their boundaries through public clouds, or across organizations inside the enterprise through private clouds. APIs are a simple and flexible way for enterprises to share their resources and services internally and externally.

■ Mobile computing – mobile devices are overtaking PCs as the most broadly used devices for accessing information resources. Moreover, mobile computing wants a lightweight approach for connecting to enterprises' data and services due to mobile devices' limited resources. Therefore, mobile computing has become one of the major driving forces for adopting and developing APIs.

■ Social computing – open to everyone and every device. Facebook and Twitter use simple RESTful APIs to connect their social networks and social services, and allow developers and enterprises to integrate with and access their core social platforms for their business.

■ Big Data and Analytics – Big Data refers to relatively large amounts of structured and unstructured data that require machine-based systems and technologies in order to be fully analyzed. Cloud-based APIs can help


companies both analyze and distribute big digital data cheaply. The open source Apache Hadoop API plus NoSQL database technology such as MongoDB can make Big Data analytics cost-effective, scalable, and fault-tolerant.

■ Internet of Things (IoT) and Machine-to-Machine (M2M) – IoT and M2M represent future technology and business, and are among the new driving forces of the API Economy. API Economy players such as Layer7 and Apigee have predicted how M2M and IoT will shape the API Economy's future. APIs will be broadly applied to IoT and M2M as the Web interfaces through which smart devices connect to IoT services. The API gateway will be one of the core components in IoT and M2M architectures.

Exposing resources and services to people, and allowing developers and partners to access and integrate with companies' core business through APIs, increases opportunities and innovation. However, it also increases risks and challenges, which include:

■ APIs are developer-defined interfaces to services. They are used to encapsulate complexity in application services and selectively expose functionality. Developers can build new solutions based on APIs. However, not all APIs are well defined and perform well. Using a bad API, or misusing a good API, can cause software system failures or performance issues, and a bad API may put your system at risk. The following two REST API calls represent a security risk. The first puts the API key in its URL; you may be charged by the service provider if your API key is stolen. The second is riskier still: it travels over plain HTTP, so neither the transaction nor the API key is protected (a hedged sketch of a safer pattern follows this list).

■ https://example.com/controller/<id>/action?apiKey=a53f435643de32

■ http://example.com/controller//action?apiKey=a53f435643de32

■ API quality assurance, such as availability, scalability, reliability, and security, is a main concern for enterprises using open APIs. In today's global economy and complicated IT environments, to complete a business transaction you may need to use internal APIs to connect to core business services in your own data center, use partners' APIs to do a B2B transaction, and use open APIs to get additional information. Any API failure will cause part of the transaction to fail and impact your customer experience. Guaranteeing API infrastructure quality is a big challenge. The challenges include:

■ Guaranteeing API software quality requires good API design-time governance.

■ Guaranteeing API runtime quality requires good API runtime governance. Modern composite applications aggregate and consume multiple APIs – private, partner, and public – at a staggering pace in order to achieve business goals. Ensuring API integrity is a big challenge.

■ API governance, as an extension of existing SOA governance, is new to enterprises. For instance, API testing is a must-have process in the enterprise software development lifecycle, ensuring that APIs deliver the necessary level of security, reliability, and performance.

■ API service level agreements are a concern for both API providers and API consumers. Reaching those agreements and delivering what API consumers want is also a challenge. According to a report from Parasoft, 90% of respondents reported that APIs failed to meet their expectations: 68% encountered reliability/functionality issues, 42% encountered security issues, and 74% encountered performance issues.

■ API security is one of the biggest concerns for enterprises. It includes service and infrastructure access security, data security, and trust. API security compliance and protection of services as well as data


are challenges.

■ API consumers face risks in moving to the new API business model, since they depend on the terms and conditions (T&C) of API providers.

■ API governance is a big challenge, since APIs include internal, external, and open APIs that support different protocols (SOAP, REST, JMS, and more) and are developed by different vendors, software startups, and individuals. The API governance challenges include:

■ Design-time governance, such as API versioning and design standards, specifically new REST-style API development standards.

■ Runtime governance, such as API monitoring, API deployment, and dynamic provisioning.
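By way of contrast with the risky calls shown earlier in this list, the following is a minimal sketch, in Java using the standard java.net.http client, of a safer pattern: the request travels over HTTPS and the API key is carried in a request header rather than in the URL, so the key does not leak into proxy logs or browser history. The endpoint, header, and environment variable names are illustrative assumptions, not details taken from the article.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SafeApiCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; HTTPS ensures the transaction is encrypted in transit.
        URI endpoint = URI.create("https://example.com/controller/42/action");

        // The API key is read from the environment and sent in a header,
        // not in the query string, so it is not exposed in URLs or access logs.
        String apiKey = System.getenv("EXAMPLE_API_KEY");

        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Authorization", "Bearer " + apiKey)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}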

Facing the above risks and challenges of the API Economy, API management works to reduce the risks, provide solutions to the challenges, and protect API businesses. API management is defined in the next subsection, and its relationship with SOA governance is discussed.

Definition of API Management

We have seen that the API Economy requires a new service infrastructure component – API management – which provides API governance and powers the API Economy. This section first defines API management, and then discusses the relationship between SOA and cloud governance (Chapter 18) and API management.

Definition of API Management: API management is a set of processes and technologies for governing APIs in a secure and scalable service infrastructure. It includes a minimum set of required functionalities (a brief sketch of a few of these checks follows the list):

■ An API developer portal for managing API development and providing API lifecycle management, along with the process and interface for publishing, discovering, maintaining, and overseeing APIs.

■ Automating and controlling connections between an API and the applications that consume it.

■ Monitoring API traffic and other quality metrics, such as performance and reliability (for instance, error rate), from the applications that use it.

■ Providing proper API versioning technology to ensure consistency between multiple API implementations and versions.

■ Ensuring API scale and improving application performance through dynamic provisioning technology and caching mechanisms.

■ Protecting APIs from misuse and other vulnerabilities at API access points or endpoints by providing API security solutions, which include basic security, such as SSL and TLS, and advanced API security, such as API access authentication and authorization, key management, and perimeter defense for enterprise-class APIs.

■ Providing capabilities for metering and billing the utilization of commercial APIs.
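As a rough illustration of how a few of these functionalities (access control, metering, and a simple quota) fit together, here is a toy sketch in plain Java with in-memory structures. It is not the design of any particular API management product; the key values and window size are assumptions, and a real product would add persistence, distributed counters, key lifecycle management, and billing.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Toy API management checkpoint: key check, per-key metering, simple quota. */
public class ApiCheckpoint {
    private final Set<String> validKeys = Set.of("key-partner-1", "key-internal-7"); // assumed keys
    private final Map<String, AtomicLong> callsPerKey = new ConcurrentHashMap<>();
    private final long maxCallsPerWindow = 1000; // metering window handling omitted for brevity

    /** Returns true if the call may proceed; also meters usage for later monitoring or billing. */
    public boolean admit(String apiKey) {
        if (apiKey == null || !validKeys.contains(apiKey)) {
            return false; // unknown or missing key: reject before reaching backend services
        }
        long used = callsPerKey.computeIfAbsent(apiKey, k -> new AtomicLong()).incrementAndGet();
        return used <= maxCallsPerWindow; // crude rate limit / quota check
    }

    public static void main(String[] args) {
        ApiCheckpoint checkpoint = new ApiCheckpoint();
        System.out.println(checkpoint.admit("key-partner-1")); // true
        System.out.println(checkpoint.admit("stolen-key"));    // false
    }
}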

From the definition of API management we can see that some functionalities, such as monitoring and security, are the same as in basic SOA governance and management. However, many new functionalities provided by


API management, such as the API developer portal, key management, and metering as well as billing capabilities, have never been provided by SOA management. Therefore, API management extends SOA governance and management for the new API Economy and improves enterprise architecture agility. According to Gartner's research, the hybrid approach that combines existing SOA governance and API management can be described as Application Services Governance, which provides solutions and technologies for guaranteeing the success of both existing SOA approaches and the new API Economy.

Role of API Management in Service Infrastructure

API Tier in App Services Infrastructure

The API has become a tier in the modern application services compute infrastructure, and the API tier is playing an increasingly important role. Figure 3 describes the typical API tiers in the application services infrastructure. There are two different API tiers:

■ API tier between applications and the middleware and/or ESB, which is in the scope of API governance and is managed by API management technology such as the API gateway. This tier is for applications consuming resources and services from backend systems. The majority of this API tier consists of REST-style APIs (Web APIs), with JSON used as the data exchange format. Another popular type is the SOAP-based API, which is often used for consuming SOAP web services. Strictly speaking, a traditional (or classical) API is defined as an access method to a service (or a service interface, in SOA terminology). The SOAP-based API is a kind of traditional API that can be viewed as an in-process service. The Web API is a new kind of API: a remote API service based on HTTP. We mainly discuss API governance and management for this API tier in this paper.


■ API tier between the middleware or ESB and application services, which include existing SOAP web services, Java Enterprise Edition services, .NET WCF services, messaging services, data storage, and other services that are governed and managed by enterprise SOA governance.

Figure 3 – API Tiers in App Services Compute Infrastructure

API Gateway and its Role in App Service Infrastructure

The API Economy introduced a new API tier into the modern application service compute infrastructure, as shown in Figure 3. The API tier is becoming a critical bridge from customers to enterprise services, from the enterprise to cloud services and partners' services, and from one cloud to another. Further, the APIs include internal, external, and public APIs. Therefore, API security, performance, routing, and multi-tenancy become very challenging in the new API-centric architecture. API management is emerging for governing and managing APIs. In general, API management consists of the following main components:

■ API Portal – which is a design-time API governance tool for managing API registry (or publishing), API profile (or documentation), API control, and API development lifecycle.

■ API Gateway – which is the core API runtime governance component for managing API runtime behaviors, such as routing, multi-tenancy, security (identity, authentication as well as authorization).

■ API Service Manager – which is a component for managing the API lifecycle, such as migration, dynamic versioning, deployment, configuration, and API changes (such as policy changes and configuration changes).

■ API Monitor – which is part of the API runtime governance components, for metering API runtime behaviors such as performance and usage.

■ API Billing or Chargeback – billing is used for utility-oriented public APIs, such as the Amazon EC2 API, and chargeback in the case of on-premise or private clouds. Both are based on metered usage.


In this section, the API gateway and its role in service infrastructure are described and discussed. The API gateway consists of the following main common components:

■ API routing manager

■ API security manager (such as API key management, OAuth and OpenID)

■ API mediation

For example, Layer7 has a family of API gateways, shown in Table 1:

API Gateway – Description

API Proxy – Provides the core functionalities needed for enterprise-scale API security and management

CloudConnect Gateway – Provides connectivity for accessing SaaS applications and other cloud services securely and seamlessly

SOA Gateway – Provides centralized governance services integrated across the extended enterprise

Mobile Access Gateway – Provides the capacity to connect mobile devices and apps to open enterprise information assets and services securely and efficiently

Table 1 – Layer7 API Gateways

The API gateway is a lightweight service mediator that simplifies the application delivery stack. It acts as a control point between the enterprise service infrastructure and the outside world accessed through APIs, and can provide the following main features to the modern service compute infrastructure:

■ Integration – API gateways can integrate with existing Identity Management (IM) infrastructure, such as CA SiteMinder, to perform both authentication and authorization of API message traffic. API gateway can integrate with existing dynamic service provisioning and offer a highly flexible and scalable solution architecture.

■ Anypoint Connectivity – API gateways allow applications to invoke services that run anywhere as well as anytime (such as cloud services, mobile services), and allow apps to seamlessly move any services around at will without affecting existing service infrastructure.

■ Mediation – API message routing is one of the API gateway's main features. It extends SOA mediation and delivers API messages between service consumers and service providers. The API gateway routes data and messages based on the user's identity and content types, so it enables data and messages to be sent to the appropriate applications securely (a small routing sketch follows this list).

■ Governance – API gateways provide centralized management for API changes, API traffic, API deployment, policy enforcement, and API issue reporting.


■ Security – API gateways enable enterprises to secure their Web APIs against hacker attacks and API abuse. The gateway can act as a central security checkpoint through its support for broad security standards such as SSO, OAuth 2.0, SAML, and OpenID. For instance, an API gateway can authenticate internal clients by user ID and password and then issue SAML tokens that are used for identity propagation to application servers.

■ Transactions – enterprise-class API gateways also support business transactions by meeting audit requirements and PCI compliance and by securing sensitive data.

■ Performance – some API gateways, such as the Apigee API gateway, also provide caching technology to increase performance. Others, such as the Oracle API gateway, integrate an XML acceleration engine (VXA) to make XML processing faster.
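To make the mediation feature concrete, the sketch below shows identity- and content-based routing in miniature. The tenant names and backend URLs are invented for illustration; a real gateway would resolve routes from configuration or a registry and would apply security and policy checks before forwarding.

import java.util.Map;

/** Toy routing table: picks a backend based on the caller's tenant and the content type. */
public class GatewayRouter {
    private static final Map<String, String> TENANT_BACKENDS = Map.of(
            "retail",  "https://retail-services.internal.example.com",
            "partner", "https://partner-gateway.example.com");

    /** Returns the backend URL a message should be forwarded to. */
    public static String route(String tenant, String contentType, String path) {
        String base = TENANT_BACKENDS.getOrDefault(tenant, "https://default-services.example.com");
        // Content-based mediation: XML/SOAP traffic goes to the legacy SOAP endpoint,
        // JSON traffic to the REST endpoint, mirroring the two API tiers described earlier.
        String suffix = contentType.contains("xml") ? "/soap" : "/rest";
        return base + suffix + path;
    }

    public static void main(String[] args) {
        System.out.println(route("retail", "application/json", "/orders/42"));
        System.out.println(route("partner", "text/xml", "/orders/42"));
    }
}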

Key Takeaways

We have introduced API governance and management in this paper. The key takeaways are:

■ Cloud computing, mobile computing, and social computing drive the API Economy. It is a new IT development trend that leads IT innovation and IT alignment with the business.

■ APIs are becoming a primary customer interface for technology-driven products and services and a key channel for driving revenue and brand engagement.

■ APIs increase the exposure of enterprise services and data and therefore increase the value of those services and data.

■ API management is the key to API Economy success. It is an extension of SOA governance and management and one of the core components of modern service infrastructure. It plays a central role in API-centric service system integration.

■ API-centric architecture is another enterprise architecture shift. Adopting an API-centric enterprise architecture can improve the security, agility, scalability, and cost-effectiveness of the IT service infrastructure.


Longji Tang serves as a Senior Technical Advisor at FedEx's Information Technology Division, where he has acted as a tech lead and/or architect on several critical eCommerce projects. Currently, Longji is the lead project manager for FedEx.com's Data Center Modernization project. His research focuses on software architecture and design, service-oriented architecture, service-oriented cloud computing and applications, and system modeling and formalism. Prior to his tenure with FedEx, Longji worked from 1995-2000 as an Information System and Software Engineering Consultant at Caterpillar and IBM. He has published more than 20 research papers, from numerical analysis to computer applications, in the Journal of Computational Mathematics, Acta Mathematica Scientia, and other publications. After graduating from Hunan University with a Bachelor of Engineering degree in Electrical Engineering in 1980, he worked as an associate research fellow at the Hunan Computing Center from 1980 to 1992. He began graduate studies at Penn State University in 1992 and graduated in 1995 with a Master of Engineering degree in Computer Science & Engineering and a Master of Arts degree in Applied Mathematics. Longji undertook his PhD studies in Software Engineering as a part-time student at the University of Texas at Dallas beginning in June 2002 and obtained his PhD degree in 2011.

Contributions

■ Enterprise Mobile Services Architecture: Challenges and Approaches - Part III

■ Enterprise Mobile Services Architecture: Challenges and Approaches - Part II

■ Enterprise Mobile Services Architecture: Challenges and Approaches - Part I

■ Modeling and Analyzing Enterprise Cloud Service Architecture - Part I

■ Modeling and Analyzing Enterprise Cloud Service Architecture - Part II

■ SLA-Aware Enterprise Service Computing - Part II

Dr. Mark Little is VP of Engineering at Red Hat, where he leads JBoss technical direction, research, and development. Prior to this he was the SOA Technical Development Manager and the Director of Standards. He was also the Chief Architect and Co-Founder at Arjuna Technologies, as well as a Distinguished Engineer at Hewlett Packard. He has worked in the area of reliable distributed systems since the mid-eighties. His Ph.D. was on fault-tolerant distributed systems, replication, and transactions. He is currently also a professor at Newcastle University.

Contributions

■ API Governance and Management

Longji Tang

Mark Little


www.arcitura.com/workshops

Q1 2015

Certified Cloud Storage Specialist – January 5-7, 2015 – Fairfax, VA, United States
Certified SOA Consultant – March 2-6, 2015 – Virtual (PST)
Certified Big Data Science Professional – February 9-11, 2015 – Toronto, ON, Canada
Certified Cloud Architect – January 18-22, 2015 – Dubai, UAE
Certified Big Data Scientist – March 23-27, 2015 – Fairfax, VA, United States
Certified SOA Governance Specialist – February 9-11, 2015 – Virtual (PST)
Certified SOA Architect – January 26-30, 2015 – Fairfax, VA, United States
Certified Cloud Architect – November 23-27, 2015 – Naarden, Netherlands
Certified SOA Architect – February 16-20, 2015 – Bangalore, India

Workshop Calendar

SOA Architect Certification – January 12-16, 2015 – Virtual (PST)
Cloud Architect Certification – January 18-22, 2015 – Dubai, UAE
Big Data Scientist Certification – January 19-23, 2015 – London, UK
Cloud Technology Professional Certification – January 21-23, 2015 – Las Vegas, NV, United States
Cloud Virtualization Specialist Certification – January 26-28, 2015 – Virtual (PST)
SOA Architect Certification – January 26-30, 2015 – Fairfax, VA, United States
SOA Architect Certification – February 2-6, 2015 – Utrecht, Netherlands
SOA Architect Certification – February 2-6, 2015 – Toronto, ON, Canada
Cloud Architect Certification – February 2-6, 2015 – Sydney, NSW, Australia
Big Data Science Professional Certification – February 9-11, 2015 – Toronto, ON, Canada
SOA Governance Specialist Certification – February 9-11, 2015 – Virtual (PST)
SOA Architect Certification – February 9-13, 2015 – Cape Town, South Africa
Cloud Professional Certification – February 12-13, 2015 – Petaling Jaya, Malaysia
Cloud Architect Certification – February 16-20, 2015 – Fairfax, VA, United States
SOA Architect Certification – February 16-20, 2015 – Bangalore, India
Big Data Scientist Certification – February 18-20, 2015 – Virtual (PST)
Cloud Storage Specialist Certification – February 23-25, 2015 – Virtual (PST)
Cloud Architect Certification – February 23-27, 2015 – Naarden, Netherlands
Cloud Technology Professional Certification – March 2-4, 2015 – Chennai, India
SOA Consultant Certification – March 2-6, 2015 – Virtual (PST)
Cloud Architect Certification – March 2-6, 2015 – Las Vegas, NV, United States
SOA Architect Certification – March 8-12, 2015 – Dubai, UAE
Big Data Science Professional Certification – March 9-11, 2015 – Bangalore, India
SOA Architect Certification – March 9-13, 2015 – Melbourne, VIC, Australia
SOA Architect Certification – March 9-13, 2015 – Frankfurt, Germany
SOA Architect Certification – March 15-19, 2015 – Riyadh, Saudi Arabia
Big Data Consultant Certification – March 16-20, 2015 – London, UK
SOA Architect Certification – March 16-20, 2015 – Las Vegas, NV, United States
Cloud Technology Professional Certification – March 18-20, 2015 – Naarden, Netherlands
SOA Architect Certification – March 22-26, 2015 – Dubai, UAE
Big Data Scientist Certification – March 23-27, 2015 – Fairfax, VA, United States
Cloud Architect Certification – March 23-27, 2015 – Bangalore, India
Cloud Technology Professional Certification – March 30 - April 1, 2015 – Virtual (PST)
Cloud Professional Certification – April 16-17, 2015 – Petaling Jaya, Malaysia
Cloud Architect Certification – May 18-22, 2015 – Naarden, Netherlands
Cloud Technology Professional Certification – June 24-26, 2015 – Naarden, Netherlands
Cloud Technology Professional Certification – September 23-25, 2015 – Naarden, Netherlands
Cloud Architect Certification – November 23-27, 2015 – Naarden, Netherlands
Cloud Technology Professional Certification – November 30 - December 2, 2015 – Naarden, Netherlands


Security and Identity Management Applied to SOA - Part II by Jose Luiz Berg, Project Manager & Systems Architect, Enterprise Application Integration (EAI)

Web Services

To understand how to integrate Web Services with the security infrastructure, we must first define some fundamental concepts. We have already said in the previous chapter that the great challenge of security with respect to Web Services is that they break the boundaries between applications, transforming all applications into a single big one. This statement is true not only of Web Services but of any technology allowing remote execution of routines. In this document, when you read Web Services, we mean remote services, whatever the technology used. According to OASIS, a service has the following definition:

“A service is a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description.1 A service is provided by an entity – the service provider – for use by others, but the eventual consumers of the service may not be known to the service provider and may demonstrate uses of the service beyond the scope originally conceived by the provider.”

So, although the objective of this document is the integration of Web Services with the security infrastructure, wherever possible the term "service" is used to designate remote functionality made available by an application, so that the same definition can be applied to any technology used. The term Web Service (WS) is used only when we drill down into the form of operation specific to Web Services.

When we talk about WS, we mean sets of functionality made available by applications, which may be consumed by sending messages using high-level protocols such as SOAP or REST over a means of transport such as HTTP or TCP/IP.

The challenge of building a security architecture for WS is to reconcile internal systems development standards with market standards and the functionality provided by security systems, in order to obtain a pattern that is efficient, easy to deploy, and, where possible, compatible with other solutions available in the market. To meet these requirements we are going to consider the use of the WS-Security standard, developed by OASIS, which is a well-known reference in the market today and is supported by the majority of products.

WS-Security

The WS-Security standard was developed by OASIS to address the security requirements of WS. Unlike other standards such as Liberty Alliance and OpenID, which can also be used in Web pages, WS-Security is geared directly toward service calls made by a program, without human interaction.

As the standard was designed to be used with SOAP WS, data is always added within the "Header" tag of the message, using the XSD schema defined by OASIS. As an industry standard, it is implemented in numerous application servers and application firewalls, ensuring that the infrastructure will be compatible with market products. Detailing the WS-Security standard does not fit within the scope of this document; we cover only the main services that are relevant to our study (a small header-building sketch follows the list):


■ Encryption – allows partial or total encryption of the message by indicating the encrypted blocks and the algorithms required to perform the decryption. Public keys can also be included in the message, so they do not need to be previously known for decryption.

■ Digital signature – as with encryption, signatures may be applied over the entire message or part of it, generating hashes using asymmetric encryption and including in the message header all the information necessary to perform hash validation.

■ Authentication – supports various authentication formats through the inclusion of user data in the message, using a component named a "token". Several types of tokens are supported, such as login/user tokens, binary tokens (X.509 or Kerberos), and XML tokens supporting the SAML assertion standard. In all cases, the tokens are digitally signed, ensuring that they cannot be changed over the wire.
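To make the point about the Header tag concrete, here is a minimal sketch that builds a SOAP message with a WS-Security UsernameToken header using the standard SAAJ (javax.xml.soap) API, assuming that API is available on the classpath. In practice, a library such as those discussed below generates and signs these headers for you; the account name is an assumption, and a plaintext password is shown only to keep the example short.

import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPMessage;

public class WsSecurityHeaderDemo {
    private static final String WSSE_NS =
            "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public static void main(String[] args) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        SOAPEnvelope envelope = message.getSOAPPart().getEnvelope();
        SOAPHeader header = envelope.getHeader();

        // WS-Security data always lives inside the SOAP Header, as described above.
        SOAPElement security = header.addChildElement("Security", "wsse", WSSE_NS);
        SOAPElement token = security.addChildElement("UsernameToken", "wsse");
        token.addChildElement("Username", "wsse").addTextNode("svc-account");   // assumed account
        token.addChildElement("Password", "wsse").addTextNode("not-a-real-pw"); // sign/encrypt in real use

        message.getSOAPBody().addChildElement("Ping", "app", "http://example.com/app"); // sample payload
        message.saveChanges();
        message.writeTo(System.out); // prints the envelope with the wsse:Security header
    }
}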

As the necessary information for these operations is always included in the message header, all security validation can be done by a server without it even knowing the rest of the message content. There are also several libraries available in the market that implement the standard and may be used in the client or in the application server to generate or validate messages. One of the most modern libraries today is Apache CXF. With CXF, it is possible to handle many features and message formats, with support for the following technologies:

■ Support for JAX-WS 2.x client and server

■ JAX-WS 2.x API – synchronous, asynchronous, and one-way invocations

■ JAX-WS 2.x Dynamic Invocation Interface (DII)

■ Support for JAX-RS RESTful clients

■ Support for wrapped and non-wrapped styles

■ Support for the XML messaging API

■ Support for JavaScript and ECMAScript for XML (E4X) – client and server

■ Support for CORBA

■ Support for JBI with ServiceMix

The main problem in implementing the WS-Security standard is the complexity of constructing the message, which is fairly easy in the case of Java, using CXF, and in .NET systems using the Microsoft WSE library. For PHP applications, WSO2 WSF/PHP can be used; it implements a smaller set of functionality but covers normal needs.

Security components

Having established the standards that may be used by services, we now re-examine the security components, establishing how and where they will be implemented in the architecture. Whenever possible, the term service is used to denote a generic service on any technology, and WS is used when a detail is specific to Web Services.

Confidentiality

The use of encryption on the communication channel is a requirement that strongly affects the performance of application servers. The cost of decrypting the entire message is high, so channel encryption should be used only when the data is quite sensitive, giving preference to encrypting only the necessary data within the message. If it is


considered that channel encryption is necessary, one should consider the possibility of terminating SSL at reverse proxies and forwarding the message to the application servers over HTTP, centralizing the encryption workload and relieving the servers responsible for implementing the business routines. With this separation of tasks, you have full visibility of the cost of communications and of business processing, and for high-load implementations you have the option of using hardware-accelerated decryption. In some cases, services are executed both by clients and by other servers; in these cases, you may mix various endpoints with different encryption schemes.

Figure 1 – Executing a WS from a client and among application servers

The definition of which data needs to be kept confidential is part of the definition of the business service being implemented, and should be part of its requirements specification. In addition to obvious fields such as login and password, there may be numerous other fields that should not be disclosed, usually involving monetary values, internal identifiers, private personal data, or even internal application passwords.

A possible attack exploiting a breach of confidentiality would be to monitor valid messages in search of relevant information, such as credit card numbers, customer codes, and valid transaction numbers, and then build a fake message using these data, which could be accepted by the application because it contains valid data.

Integrity

Data integrity is another requirement that must be addressed in the requirements specification of the service to be built, because most of the time this insight is possible only for those who deeply understand the meaning of the data to be processed.

The mechanism to ensure integrity is the digital signature, which can be applied over the entire contents of the message or only over the parts indicated as sensitive. Unless the processing cost becomes infeasible, a good practice is to always sign the entire content of the message, ensuring that it can never be changed in transit.


One of the possible attacks against a WS is to intercept a message along the way, change any field that is not encrypted, and send it again to the same destination. Another common attack is called "replay", which consists of simply resubmitting a message without changes, causing problems for the application or even serving as a form of DoS (denial of service) attack. If this type of attack is relevant, the application may use control fields, dates, or even the hash of the message to identify and discard duplicates.

In a B2C site, a WS can be used to finalize an order, including the quantity of items sold. By intercepting the message, an attacker can increase the quantity. In this case, you can use the hash to identify the breach of the message's integrity and refuse the operation. On the same site, the hacker could buy a product and resend the finalizing message many times; in this case the hash of the message is valid, and the operation will be accepted unless some replay control is implemented (a minimal sketch of such a control follows).
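Here is a minimal sketch of the replay control mentioned above: the service remembers the hashes of recently processed messages and rejects any repeat seen within a validity window. The window length and the in-memory map are assumptions for illustration; a clustered deployment would need a shared store.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Duration;
import java.time.Instant;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy replay guard: discards a message whose hash was already seen inside the window. */
public class ReplayGuard {
    private final Map<String, Instant> seen = new ConcurrentHashMap<>();
    private final Duration window = Duration.ofMinutes(15); // illustrative validity window

    /** Returns true for a first-time message; false for a replay within the window. */
    public boolean isFresh(String rawMessage) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(rawMessage.getBytes(StandardCharsets.UTF_8));
        String hash = Base64.getEncoder().encodeToString(digest);
        Instant now = Instant.now();
        seen.values().removeIf(t -> t.plus(window).isBefore(now)); // forget expired entries
        return seen.putIfAbsent(hash, now) == null;                // null means not seen before
    }

    public static void main(String[] args) throws Exception {
        ReplayGuard guard = new ReplayGuard();
        String order = "<finalizeOrder items=\"1\"/>";
        System.out.println(guard.isFresh(order)); // true: first submission is processed
        System.out.println(guard.isFresh(order)); // false: replayed message is discarded
    }
}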

Non-repudiation

The use of reciprocal certificate signatures depends on your client presenting valid public certificates to be used in the operation. This is easier in a B2B scenario than in B2C, where the end user has no experience handling this kind of technology. This feature should be used in critical processes, normally involving high-value monetary operations, where it must be ensured that the user cannot repudiate the operation later.

The most common attack in this category is breaking the secrecy of the certificate store. Many users rely on weak passwords, write them down on paper, or simply lend their credentials to other people to perform tasks on their behalf. Another common problem is using the same password everywhere, including Internet sites with weak credential storage. Upon discovering a user's password at a single site, a hacker will always try to find other places where the user has an account and try the same password. Another common way to discover a password is through free e-mail systems: many users choose easy-to-remember passwords for these services because they are not critical, but later register on other sites using the same e-mail address. After guessing your weak e-mail password, a hacker can use the "forgot my password" feature on other sites, and the password reset will be sent to the compromised mail account.

Once again, the definition of when a mutual digital signature should be used must be part of the business requirements and should be established before building the service, defining which data should be signed and which type of signature applied.

Authentication

In the world of services, authentication is a lot more complex than in regular applications, because it must be performed by a program, without a user to enter a password, and there is no session object to store data and control access.

An easy solution would be to send the login and an encrypted password in all service calls, but then, to decrypt and validate the password, applications would have to negotiate digital certificates, and once an application has your plain password it may use it in the wrong way, handling it unsafely or writing it to log files. A service is a black box to the requester, so sensitive data such as passwords should never be sent to services where we have no control over how they will be handled.

The solution to this problem is the use of "assertions". An assertion is simply an XML snippet, usually containing the user ID, the date of authentication, the start and end dates of its validity, the server and the type of authentication that issued it, and a unique authentication identifier. A digital signature validates this XML, ensuring that it cannot be changed in transmission. When it receives an assertion, a server can identify where the authentication was issued (the IDP) and validate it using that server's public certificate. If


the assertion is valid (its hash is correct), the IDP is trusted, the assertion is within its validity period, and the type of authentication matches expectations, the server can trust that the authentication was done by the caller and that the received user is the consumer of the service. If it is necessary to execute a cascading service, the assertion may be included in the message, ensuring that the requesting user is known to all services in the chain.
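A minimal sketch of those checks follows, using an invented Assertion type rather than a real SAML library (which would also verify the XML signature against the IDP's certificate); the IDP address and authentication type are assumptions.

import java.time.Instant;
import java.util.Set;

/** Simplified view of an assertion; a real SAML assertion is signed XML. */
record Assertion(String userId, String issuer, Instant notBefore, Instant notOnOrAfter,
                 String authnType, boolean signatureValid) { }

public class AssertionValidator {
    private static final Set<String> TRUSTED_IDPS = Set.of("https://idp.example.com"); // assumed IDP

    /** Mirrors the checks in the text: signature, trusted issuer, validity window, authentication type. */
    public static boolean isAcceptable(Assertion a, String expectedAuthnType) {
        Instant now = Instant.now();
        return a.signatureValid()                           // hash/signature is correct
                && TRUSTED_IDPS.contains(a.issuer())        // issued by a trusted IDP
                && !now.isBefore(a.notBefore())             // already valid...
                && now.isBefore(a.notOnOrAfter())           // ...and not expired
                && a.authnType().equals(expectedAuthnType); // matches expectations (e.g. password vs certificate)
    }

    public static void main(String[] args) {
        Assertion a = new Assertion("jose", "https://idp.example.com",
                Instant.now().minusSeconds(60), Instant.now().plusSeconds(600),
                "password", true);
        System.out.println(isAcceptable(a, "password")); // true
    }
}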

The validation of assertions inserted in messages may be done in two different ways:

■ Using a reverse proxy – before the message is forwarded to the application server. In this case, all WS traffic passes through the proxy, and a call is forwarded to the service provider only if it contains valid assertions according to the specification of the service.

■ Directly in the application server – using WS-Security libraries available for validating the assertion.

Figure 2 – Assertion is validated in the reverse proxy

In both cases, the application never has to worry about authentication, because if the call reaches it, the call already contains the specified assertions and they are valid. The only reason for an application to access the assertion is to look up further details about the authenticated user needed by its implementation.


However, before authenticating a service call, we need to identify which credentials are required for authentication. There are several possibilities:

■ Services with no authentication – not all services require authentication. A simple service that returns the list of states for a country, for example, does not need to identify the user who is requesting the information. Some services of very low criticality, usually data queries, do not require authentication.

■ End-user authentication – the most common case. It requires the credential of the user who authenticated and is using the application.

■ Service credential authentication – some services require specific credentials to run, instead of the authenticated user's. In this case, the service credential should be authenticated by the WS, using a login and password or, preferably, a digital certificate linked to the server that will consume the service. However, even in this case it is important that the service be aware of which user requested the operation, so the end user's assertion must also be included in the message, facilitating audit trails and recording on whose behalf the task is being performed. Another reason to include the end user's assertion is that the service being executed may need to chain to a second service requiring this credential.

■ Authentication with multiple users – in some special cases, multiple credentials may be required to perform a service. When you phone a call center and request an operation, what actually happens is that an operator is logged into the system, performing the operation on your behalf. The operator then requests oral confirmation of your data, or asks you to type a password or access code, to confirm your identity. As we have seen above, this is also a form of authentication, which can generate an assertion. During the operation, the system may need to perform some service that uses a service credential, so there are three assertions that can be sent: the operation is performed with the service credential, at the request of the attendant, on behalf of the end user. There are still other forms of multiple authentication, as in cases of shared responsibility, in which two or more people need to authenticate simultaneously to request an operation.

Once again, the decision about which type of authentication and which credentials will be required for each business operation must be made in accordance with business requirements, before the construction of each service.


Figure 3 – An example of using multiple assertions: the user executes service A using his assertion; this service then authenticates using a digital certificate and executes service B, including both assertions; service B then needs to execute another service outside the network, authenticated with a service credential; although not needed by service B, the user assertion must be sent along to identify the requestor

A critical point in the use of assertions concerns their validity: an assertion is actually an XML document that represents an access ticket. This XML can be transmitted, stored, or handled in any way, and if not changed it remains valid within its validity period. As we have no control over all the locations this assertion may pass through, one of the possible attacks is "credential hijacking": by capturing an assertion, a hacker can submit requests as the authenticated user. To prevent this type of attack, assertions should always be sent encrypted and have a short validity (typically between five and fifteen minutes), so that even if one is captured it can only be used during this period. This short expiration time, however, creates a technical problem: user authentication in a Web server is attached to the browser session, and the expiration of the session is calculated relative to the last operation requested, whereas the expiration of the assertion is absolute, calculated from the date of issue. We can therefore have a valid session with an expired assertion. As applications should never store the user's password (even in memory), in this case the application should log the user off and forward them to the login screen again, forcing a new authentication and receiving a new assertion. Some IDM systems allow applications to extend the validity of an assertion without presenting the credentials again. This must be used carefully, because if an attacker obtains an assertion, he can keep renewing it many times, bypassing the validity control.

When a service credential is used the task is a little easier, because the application holds the credential's password or certificate and can simply request a new assertion using those credentials. If an assertion is received by a server and its validity expires during processing, since it is not possible to request a new user authentication, an error must be generated and the operation refused.
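
For service credentials this renewal can be automated, since the application holds the credential. The sketch below assumes a hypothetical IdentityProviderClient interface that exchanges the stored credential for a fresh assertion, and reuses the SecurityAssertion placeholder from the earlier sketch; the five-minute renewal margin is an arbitrary example value.

import java.time.Instant;

// Hypothetical client for the identity provider's token-issuing endpoint.
interface IdentityProviderClient {
    SecurityAssertion issueAssertion(String credentialAlias);
}

class ServiceCredentialAssertionCache {
    private static final long RENEWAL_MARGIN_MILLIS = 5 * 60 * 1000; // renew 5 minutes early

    private final IdentityProviderClient idp;
    private final String credentialAlias;
    private SecurityAssertion current;

    ServiceCredentialAssertionCache(IdentityProviderClient idp, String credentialAlias) {
        this.idp = idp;
        this.credentialAlias = credentialAlias;
    }

    synchronized SecurityAssertion get() {
        long now = Instant.now().toEpochMilli();
        if (current == null || now >= current.notOnOrAfter - RENEWAL_MARGIN_MILLIS) {
            // The service holds its own credential, so it can simply ask for a new assertion.
            current = idp.issueAssertion(credentialAlias);
        }
        return current;
    }
}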

As already defined for confidentiality, the decision about encrypting the entire message or only parts of it belongs to the business and must be made case by case, but assertions must never be transmitted unencrypted. If the message itself is not encrypted, it is recommended that at least the assertion be.

Authorization

Enforcing authorization policy is a task usually performed by applications, but today this policy is mainly oriented to the presentation layer, hiding or disabling UI elements the user does not have the right to execute. Web services, however, have no UI, so the challenge is moving this traditional authorization into the code, enforcing it even when execution bypasses the presentation layer. Of course, UI elements must still be controlled according to user rights, so a duplicate validation must be performed.

One of the great advantages of modern IDM systems is the use of the RBAC (role-based access control) paradigm. Access rights are granted according to the user's functional roles, rather than permission by permission in each system. Thus, by assigning a role to a user in HR, he would automatically receive all the permissions required on all systems to perform that role, and additional rights requests would be needed only for exceptions or temporary tasks. Using this model, managing profiles is much easier than with the traditional model of assigning system roles and groups. However, this cultural change takes time, and the vast majority of IDM implementations keep the concept of system roles. Therefore, every application needs to define its roles and assign them to the users who request them. These are the roles that are normally validated in the application servers, typically using ACLs.

When we map this functionality to services, not much changes, because the roles to be validated are the same, but each application routine that is exposed as a service must perform the authorization before executing. The validation can be made through the same ACLs used for the UI, but the roles required to execute a service must also be defined in the service's requirements specification, so that they can be created and included in the validation.

When generating a SAML assertion for authentication, the IdP (identity provider) may include any necessary user attributes as additional parameters. With this functionality it would be possible to include all the roles assigned to a user, facilitating the authorization process. The problem is that, since roles are still application-dependent and with SSO (single sign-on) we do not know which applications the user will access, all roles would have to be included in the assertion, increasing the size of the message and reducing performance. Therefore, it is reasonable to provide a mechanism that allows the PEP (policy enforcement point) to look up the roles the user is assigned to and validate them against the ACL.
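
A minimal sketch of such a check follows, assuming a hypothetical RoleDirectory the PEP can query on demand instead of carrying every role inside the assertion; the ACL here is simply an in-memory map from operation name to the roles allowed to execute it.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical lookup service: resolves the roles assigned to a user on demand.
interface RoleDirectory {
    Set<String> rolesOf(String userId);
}

class PolicyEnforcementPoint {
    // ACL: operation name -> roles allowed to execute it.
    private final Map<String, Set<String>> acl = new HashMap<>();
    private final RoleDirectory roles;

    PolicyEnforcementPoint(RoleDirectory roles) {
        this.roles = roles;
    }

    void allow(String operation, Set<String> allowedRoles) {
        acl.put(operation, allowedRoles);
    }

    boolean isAuthorized(String userId, String operation) {
        Set<String> allowed = acl.getOrDefault(operation, Collections.emptySet());
        // Query the directory instead of bloating the assertion with every role.
        for (String role : roles.rolesOf(userId)) {
            if (allowed.contains(role)) {
                return true;
            }
        }
        return false;
    }
}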

In addition to checking user roles against the ACL, there are several other business authorizations, normally mixed into the code of regular operations. It is common to check approval limits, areas of operation, discount limits, and many other situations where the authorization belongs to business rules. Chaining service calls is a critical case of authorization policy, because the authorization should be validated before any service is executed. If a service for which a user has rights executes part of a transaction and then calls a chained service the user does not have the right to execute, it may be necessary to roll back the first operation to maintain data consistency. Therefore, the permissions required to execute a service must also include the permissions needed to perform all the cascaded services. For this reason, separating authorization code from operation code is a good practice that facilitates this task and avoids inconsistencies.
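
The sketch below illustrates this separation, reusing the hypothetical PolicyEnforcementPoint from the previous example: every operation in a declared chain is verified before the first one runs, so no partial work needs to be rolled back because of a missing permission.

import java.util.List;

class ChainedAuthorization {

    // Authorize the requested operation and every service it is known to cascade into.
    static void authorizeChain(PolicyEnforcementPoint pep, String userId, List<String> operationChain) {
        for (String operation : operationChain) {
            if (!pep.isAuthorized(userId, operation)) {
                // Fail before any business logic runs, avoiding partial transactions.
                throw new SecurityException(
                    "User " + userId + " lacks rights for chained operation: " + operation);
            }
        }
    }
}

// Example: a transfer service that internally calls a notification service.
// authorizeChain(pep, "user42", List.of("TransferFunds", "SendNotification"));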

Privacy

The implementation of privacy criteria depends almost entirely on business definitions, because only by knowing the information do we know its privacy level and the conditions under which it may be used.

The usual tools for ensuring privacy are encryption and RBAC access restrictions, but care must also be taken when recording audit trails and logs and when storing data in databases and other types of files, so that this is done according to the privacy level required for each piece of information.

Availability

Service availability does not impose many constraints on development, but operations with critical availability requirements must be monitored to ensure they meet them. This is only possible if the requirement has been identified before the service is built.

Audit

Using services directly affects audit routines, primarily because of their distributed nature. For audit trails to be effective, you need to consider all existing systems and monitor all services and servers to identify when an operation started in one application but also performed tasks in other applications within the same business transaction.

The easiest way to do this is to create transaction identifiers, normally associated with the assertionID attribute, which is part of the assertion and is created at the time of authentication. In legacy systems, transactions are recorded against the user's login, which may cause confusion if the user is authenticated at more than one station. The assertionID, however, identifies each particular authentication: if the user opens two different browsers, logs into the same system, and executes the same operation in both, each operation will have a different assertionID. The challenge is that generating this kind of audit trail is not usual for developers, who normally consider regular logs sufficient for auditing. There are many systems on the market specialized in capturing and generating audit events by receiving messages from applications containing audit data. These systems use transaction ids to correlate data and identify each business operation. This can also be done using log files, but it is much easier and more effective to develop the system to include this information from the start. Once more, identifying the boundaries of transactions and the operations to be audited is a task for business specialists, and must be defined in the business requirements.
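
As a rough sketch of what such an audit event might look like, the code below uses the Log4J API (mentioned again in the recommendations) to write structured key=value events to a dedicated audit logger; the logger name, the field layout, and the idea of routing it to a Syslog destination through configuration are assumptions for this example, not a prescribed format.

import org.apache.log4j.Logger;

class AuditTrail {
    // A dedicated logger; its appender (e.g. Syslog) is chosen in the Log4J configuration.
    private static final Logger AUDIT = Logger.getLogger("audit");

    // Every event carries the assertionID so operations spanning several
    // services can be correlated into one business transaction.
    static void record(String assertionId, String userId, String service, String operation, String outcome) {
        AUDIT.info("assertionId=" + assertionId
                + " user=" + userId
                + " service=" + service
                + " operation=" + operation
                + " outcome=" + outcome);
    }
}

// Example: AuditTrail.record(assertionId, "user42", "OrderService", "CreateOrder", "SUCCESS");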

Technical Recommendations

So far, we have identified the components of security, mapped out how they affect the construction and use of Web services, and described how they should be implemented in the corporate infrastructure in seamless integration with IDM. Now let us turn to some security recommendations, indicating some best practices.

Service security is a new discipline, and several gaps still need to be filled before standards can be considered reasonably safe. In addition, it will take time until all these practices are assimilated by the systems architecture and internal development teams, so in the meantime some trade-offs can be made that may help your implementation:

■ Until a culture of using the WS-Security standard is established, it can be assumed that any Web service that requires authentication should use SSL at the transport layer. This avoids the complexity of partial message encryption for clients, unless there are specific requirements (e.g. performance).

■ Any Web service that can be called from outside the corporate network must be authenticated and must therefore, in accordance with the previous recommendation, use SSL.

■ Special care must be taken with the security standards used in the market, because some are old and have known vulnerabilities; the minimum configuration should use AES or 3DES encryption, SHA-256 signatures, and certificates with at least 2048-bit keys.

■ Digital certificates used as service credentials should be bound to the server where they will be used, should have relatively short validity periods (one or two years), and the revocation list must be published regularly.

■ When service security is implemented, the certificate infrastructure must be strengthened, because it becomes essential to the operation of internal applications. It is therefore important to design a more robust PKI structure, including the possibility of adding an HSM (hardware security module) to the architecture to handle the creation and safe storage of these certificates.

■ To record application logs, the best technology available today is Log4J or its variations, Log4NET and Log4PHP, for Java, C#, and PHP respectively. However, these serve mainly for the application log. For the audit trail, the best technology to use must be negotiated with the audit team. A simple solution is to use Log4J configured with Syslog appenders, while establishing a structured message pattern that is completely different from the free-form text written to application logs.

■ One of the main weaknesses in applications today, widely exploited by hackers in attacks such as cross-site scripting and SQL injection, is input validation. Since Web services, like Web pages, serve as input channels into systems, applications should constrain and validate the data received before processing it (a minimal validation sketch follows this list). How to control and validate data entry is out of scope for this document, but it makes no sense to implement security if the services remain open to such attacks.
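
As a simple illustration of this last point, the sketch below combines whitelist validation of a service parameter with a parameterized SQL query using standard JDBC, which together address the injection-style attacks mentioned above; the table, column, and class names are invented for the example.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.regex.Pattern;

class CustomerLookupService {
    // Whitelist: only plain alphanumeric identifiers of a bounded length are accepted.
    private static final Pattern CUSTOMER_ID = Pattern.compile("[A-Za-z0-9]{1,20}");

    String findCustomerName(Connection connection, String customerId) throws SQLException {
        if (customerId == null || !CUSTOMER_ID.matcher(customerId).matches()) {
            throw new IllegalArgumentException("Invalid customer id");
        }
        // Parameterized query: the value is never concatenated into the SQL text.
        String sql = "SELECT name FROM customers WHERE customer_id = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, customerId);
            try (ResultSet result = statement.executeQuery()) {
                return result.next() ? result.getString("name") : null;
            }
        }
    }
}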

Conclusion

The purpose of this document was not to establish standards for implementing service security, but rather to provide systems architecture and development teams with the technical groundwork from which such standards can be established. After this step, standards, patterns, norms, and artifacts should be built for each case, aligned with your security policy, disseminated to software factories and development teams, and verified when the application is released, to ensure adherence to the standards.

In addition to defining standards, it is important that architectural components be constructed to make all these tasks as easy and transparent as possible for developers. If possible, these components should be installed on the application servers, to encourage their use and adherence to the standards.

The main message of this document is that it makes no sense to use the most sophisticated firewalls and network controls if your systems expose services that run without any security. It is the same as locking the front door but leaving the back door open. An attack will always start at the most vulnerable point, and that point defines your company's security level. Today, the lack of knowledge and of security standards in systems development is one of the leading and most critical security failures in companies.

Building services is not an easy or cheap activity. A service is a piece of code executed in response to a request from another computer, with no display or user to validate its execution. It runs silently and, because its implementation is not cheap, it is used to run critical business operations. Therefore, contrary to today's common sense, security recommendations should be especially strengthened for services, because any irregular operation will only be identified by its result, usually long afterwards, hindering the identification of the author and, therefore, its subsequent correction.

For all these reasons, it is very important to define strict standards and best practices, and to involve the company's business areas to ensure that requirements are identified and met. In virtually all components of security, business information is necessary for effective application; real security is not made of technical features like encryption or fingerprint readers, but is a set of actions and information that must be used in combination to achieve the goals.

As is commonly heard in the security field, "if simply closing doors meant security, games at major stadiums would have no audience." Security is precisely keeping only the required doors open, while having absolute control over who is coming in, what he can do, and what he has done. This control can only be achieved with correct and up-to-date information, and when standards are established and followed by all.

Bibliography and References

■ O’Neill, Mark (1/31/2003). Web Services Security (Application Development). McGraw-Hill.

■ Stuttard, Dafydd; Pinto, Marcus (8/31/2011). The Web Application Hacker’s Handbook: Finding and Exploiting Security Flaws. Wiley.

■ Rosenberg, Jothy; Remy, David (5/22/2004). Securing Web Services with WS-Security: Demystifying WS-Security, WS-Policy, SAML, XML Signature, and XML Encryption. Sams Publishing.

■ Harding, Christopher; Mizumori, Roger; Williams, Ronald. Architectures for Identity Management. The Open Group.

■ Skip Slone & The Open Group Identity Management Work Area. Identity Management. The Open Group.

■ OASIS Web Services Security (WSS) TC. WS-Security Core Specification 1.1. OASIS.

■ OASIS Web Services Security (WSS) TC. Username Token Profile 1.1. OASIS.

■ OASIS Web Services Security (WSS) TC. SAML Token Profile 1.1. OASIS.

■ OASIS Reference Architecture Foundation for Service Oriented Architecture Version 1.0, Committee Specification 01, December 4, 2012

■ Navigating the SOA Open Standards Landscape Around Architecture, a Joint Paper by The Open Group, OASIS, and OMG, July 2009

■ OASIS Reference Model for Service Oriented Architecture 1.0, Official OASIS Standard, October 12, 2006

The Cloud Storage Specialist Certification is Arriving!

CCP Module 13: Fundamental Cloud Storage
This course expands upon the cloud storage topics introduced by Module 2 by further exploring cloud storage devices, structures, and technologies from a more technical and implementation-specific perspective. A set of cloud storage mechanisms and devices is established, along with in-depth coverage of NoSQL and cloud storage services.

See more at www.cloudschool.com/courses/module7

CCP Module 14: Advanced Cloud Storage
A number of advanced topics are introduced in this course, including persistent storage, redundant storage, cloud-attached storage, cloud-remote storage, cloud storage gateways, cloud storage brokers, Direct Attached Storage (DAS), Network Attached Storage (NAS), Storage Area Network (SAN), various cloud storage-related design patterns, and the overall information lifecycle management, as it applies specifically to cloud-hosted data.

See more at www.cloudschool.com/courses/module8

CCP Module 15: Cloud Storage Lab
A hands-on lab during which participants apply the patterns, concepts, practices, devices, and mechanisms covered in previous courses, in order to complete a series of exercises that pertain to solving cloud storage problems and creating cloud storage architectures.

See more at www.cloudschool.com/courses/module9

A Look at Service-Driven Industry Models by Thomas Erl, Arcitura Education Inc., Clive Gee, Executive Consultant, IBM Software SOA Advanced Technology Group, Jürgen Kress, Oracle, Speaker, Author, Berthold Maier, Enterprise Architect, T-Systems International department of Telekom Germany, Hajo Normann, Oracle ACE Director, Pethuru Raj, SOA Specialist, Wipro Technologies, Leo Shuster, SOA Architect, National Bank, Bernd Trops, Senior Principal Consultant, Talend Inc., Clemens Utschig-Utschig, Chief Architect, Shared Service Centre, Global Business Services, Boehringer Ingelheim, Philip Wik, Redflex, DBA, Torsten Winterberg, Business Development and Innovation, Opitz Consulting

The following is an excerpt from the new book “Next Generation SOA: A Concise Introduction to Service Technology & Service-Orientation”. For more information about this book, visit www.servicetechbooks.com/nextgen.

The convergence of modern SOA practices with service technologies has been creating opportunities to form new business relationships and operational models. Intended to inspire the construction of custom models for organizations in any industry, a series of innovative models that highlight the potential of next generation SOA is explored in this chapter.

The Enterprise Service Model

The enterprise service model combines capability, business processes, organization models, and data models into a single unified view of the business and its development priorities. All of the industry models described in the upcoming sections rely on the participation of one or more service-enabled organizations and, correspondingly, the existence of one or more enterprise service models.

As a conceptual simulation of how an enterprise operates, this type of model can be applied to any organization. Developing such a model for an enterprise is valuable because any of the services contained therein can be delivered directly by IT assets using automated business processes or delivered as transactional units of business logic.

A unified model defines a physical inventory of services for implementation as IT assets and provides a common language that can be used by both business and IT professionals to better understand the other’s priorities, needs, and expectations. This alignment of IT and business encourages the development of IT solutions that can map accurately to and better support business processes, which in turn enhances business efficiency in the ability to capitalize on new opportunities and respond to new challenges. While next generation service-oriented enterprises already tend to use some service technologies to optimize business operations and achieve strategic business goals, new business opportunities can uniquely drive IT to embrace other, more diverse service technologies in an effort to leverage best-of-breed offerings.

Enterprises can have a large inventory of shared and deployed business services ranging from basic business transactions to automated, complex, or long-running business processes. With a well-defined enterprise service model of primary business activities, enterprises can prioritize solutions and leverage business models that provide the foundation for reusable services. Solutions might include discovering new potential business partners, comparing vendor deals, and on-boarding new vendors. A well-defined service model offers a service consumer-service provider approach to conducting business between operating units within the enterprise and between the enterprise and its business partners.

Next generation SOA allows for the creation of a complete ecosystem that connects and supports both business and IT, providing full integration of business objectives, operations and processes, standards, rules, governance, and IT infrastructure and assets. Enterprises can base their information models on industry standards to facilitate the interoperability of custom services with business partners and other third parties.

The first step in developing an enterprise service model is to define high-level services that are then decomposed into progressively finer-grained services representing business activities, processes, and tasks. The service inventory contains all of the services from the service model that have been physically realized as IT assets. These services can be purchased commercially, developed internally, or provided by third parties.

The service approach readily identifies repeated tasks that are common to multiple different business units and business processes. Reusable services that perform these repeated tasks should undergo automation only once to avoid unnecessary duplication and simplify the overall complexity of the IT domain. Some utility-centric services, such as those that provide security, monitoring, and reporting-related processing, are highly reusable across all business domains. Since the physical services in the inventory mirror business processes, activities, and tasks, monitoring their execution can provide a realtime picture of how the enterprise is performing relative to its business targets, which is generally unachievable with commercial application packages.

The Virtual Enterprise Model

In the virtual enterprise model, companies join together in a loose federation to compete with major players in the same industry. The virtualization of a collective enterprise enables the member enterprises to collaborate on a specific business opportunity, and affords them the freedom of rapidly disbanding with relatively little impact on the individual enterprise. A virtual enterprise is a dynamic consortium of small and medium enterprises (SMEs) that have agreed to combine efforts to create a joint product or to bid for a major contract. Large corporations may also form consortia for large-scale projects. By leveraging cloud computing advances, virtual enterprises can become indistinguishable from physical enterprises as far as externally-facing customers and users are concerned, since they typically have minimal physical presence and often little to no in-house infrastructure.

Members of the consortium may compete with each other outside the agreed scope of the virtual enterprise’s area of operations. This model allows small businesses to compete for major contracts or create products of higher complexity. Each consortium member contributes their existing skills and capabilities, and benefits from the ability to collectively achieve a result that none could accomplish individually. Opportunities, profits, and risks are shared across the consortium.

In this highly flexible model, virtual enterprises can form, expand, contract, and dissolve rapidly and inexpensively to meet market opportunities after establishing collective trust. Effective governance is required to coordinate the efforts of individual consortium members, and SOA technology can enable the integration of supply chains across the entire virtual enterprise. Service contracts and interfaces provide for clear communication between consortium members, while facilitating the addition and withdrawal of members to and from the virtual enterprise without requiring major changes to their infrastructure.

Many cross-enterprise business processes can be automated. The monitoring and reporting of automated processes and transactional service executions provides consortium members with accurate, realtime data on the state and operations of the virtual enterprise. This business model is mainly relevant for the manufacturing, distribution, retail, and service industries, as well as business opportunities provided by one-time events like the Super Bowl or Olympic Games.

A simple but promising variant of this approach would be an entrepreneurial organization whose business model is to act as a virtual holding company. A virtual holding company creates and manages virtual enterprises without being an active participant in the manufacturing of products or service offerings.

The Capacity Trader Model

In the capacity trader model, IT capacity is sold to customers as a commodity in a cloud computing environment. Parties with spare IT capacity sell to clients who require extra capacity. IT capacity traders buy and sell IT capacity to commercial users. Typically, these users operate in a different time zone and will use the purchased capacity outside of the capacity trader’s normal working hours. Capacity may also become available as the result of an oversized data center, a reduction in processing demand caused by business losses, or an overt business strategy.

Some organizations use the capacity trader model as a foundational business model to create IT capacity for sale to commercial users, while others offer capacity brokerage services and sign up multiple small capacity traders to create a high-capacity bundle that can be marketed at a premium. The capacity trader model is the 21st-century equivalent of the data center of the 1970s. Amazon.com, Inc. was the first company to sell its extra computing capacity, and many large computer companies have adopted this model to follow in its footsteps.

The Enhanced Wholesaler Model

According to the enhanced wholesaler model, the high speeds at which service-oriented automation enables wholesalers to receive contract bids from suppliers allow the wholesalers to respond more dynamically to demand, reduce or even eliminate storage costs, and maximize profits. Traditional wholesalers buy products from multiple suppliers to sell to individual customers. The enhanced wholesaler model relies on one-stop shopping to meet customer needs for a range of products and reduce unit costs by purchasing large quantities from individual suppliers.

This model is in sharp contrast to the base wholesaler business model, where the wholesaler purchases goods or services from suppliers and sells them to customers at a profit. The enhanced wholesaler can secure the best deals from many potential bidders, and, if necessary, combine their offerings to meet each customer's requirements. It can further charge a commission for locating and introducing customers to suppliers.

Service technology improves on the enhanced wholesaler model by enabling the wholesaler to expand its network of suppliers and customers. The creation, enforcing, and monitoring of formal contracts helps the wholesaler maintain multiple business relationships, while the global nature of the Web has increased opportunities to trade over great distances. Warehousing costs may be eliminated in some cases by using drop shipping, where the manufacturer delivers the goods directly to the end user.

The Price Comparator Model

The price comparator model is where a commercial organization compares the bids of multiple competing suppliers to find the best possible deal for a potential customer. Price comparators perform the service of requesting and managing quotes from multiple competing companies for common commodities, such as insurance, hotel accommodation, or rental cars. Profits are based on commission per sale and a commission fee is typically charged to the successful vendor.

In many cases, price comparators give potential customers access to multiple quotes for common goods or services through a dedicated Web site. The visitor first enters their details to contact multiple potential vendors for different quotes before selecting a preferred option based on a combination of features and price and making the purchase. In such instances, the price comparison site takes a commission on the purchase.

Unlike enhanced wholesalers, price comparators never own the products they market, but simply act as intermediaries between the buyer and seller. Setup costs are low, but a substantial investment is required for advertising if the site targets private customers, as there is massive competition in some industries. Service technology enables price comparison sites to contact many potential providers in parallel and then rank and display their offerings in realtime. Financial details of the purchase transaction can be exchanged securely and promptly. This model adapts to any industry that markets goods and services to the general public.

The Content Provider Model

Content providers create information feeds containing textual, pictorial, and multimedia data for service consumers to access. Increasing availability of high-bandwidth communications has resulted in significant growth in the amount of electronically transmitted information, including items like sports feeds and movies. A content provider supplies information feeds to information aggregator organizations, such as telephone companies, the press, and commercial Web sites, that make such content available to customers for a direct fee or through funding from advertisers. The owner of an electronic asset can make that content available to a wide number of information integrators.

Piracy can be an issue, especially in the software and entertainment industries. Services provide a secure channel between the content provider and the content aggregator, while service monitoring can be implemented to automate the billing process and provide an audit trail. Multimedia, software, and e-books currently dominate the content provider model. Some content providers deal directly with retail customers rather than through content aggregators.

The Job Market Model

In the job market model, enterprises locate and hire contractors that possess the skills suitable for specific tasks. In recent years, the job market has become more dynamic and fluid. It was once common for new graduates to have a single career specialization and to even be employed by the same company their entire working life, while graduates nowadays are generally expected to have multiple specializations, employers, and careers. Increasingly more professionals are working as short-term contractors rather than as long-term employees. The job market model is a specialized form of the employment agency that maintains a database of contractors with different skill sets and qualifications to meet the specific needs of employers.

The principal differences between the job market model’s contractor job center and an employment agency is that the positions filled are short-term rather than permanent, and that the contractors may be any combination of individuals and subcontracting companies. Using a contractor job center allows both the employer and the contractor to be part of a global marketplace without having to invest in infrastructure enterprises, which can reduce per-capita employment overheads and physical infrastructure costs. Business flexibility and agility can also be increased through the use of subcontractors rather than full-time employees. The number of contractors can be rapidly scaled up or down to dynamically meet business demands.

The increasing availability of high-bandwidth connectivity will enable many employees to work from rural or suburban locations, requiring a change in culture for many traditional businesses which will now need to employ individuals that they may never physically meet. Services provide a secure and precise means of communication between all parties. Service contracts provide information about the timing of requests and responses, and service interfaces allow software developers to remotely test and integrate systems code.

Service technology can automate the bidding process for each opportunity. The SOA infrastructure can use the agency to notify individuals of all of the opportunities for which they are qualified via a variety of channels, such as e-mail or instant messaging.

Most administrative processes can be automated to reduce setup and operating costs for the agency. While particularly appropriate for IT consultants, this model is likely the future of work for many professionals and administrative staff in many industries, who will either work from home or for small businesses. Contractor agents can be considered to be subcontractors in their own right. In addition to providing prospective employers with a list of candidates, they also employ the contractors themselves and are responsible for their performance. An alternative approach is to create a consultant market in which individuals or organizations bid against each other for specific contract opportunities. In this model, the contractor agency manages the bidding and vetoes or rates the bidder.

The Global Trader Model

The global trader model allows for an international marketing reach. While the Internet has certainly been successful at increasing the globalization of trade, some inhibitors still remain. The key issues involve trust, differences in commercial law and enforcement of those laws, and non-existent international standards.

Issues of trust exist whenever two organizations do business with one another. While Web standards help to provide secure communications, proof of identity, and an audit trail, they do not provide the ability to guarantee that each organization will fulfill contractual promises or that the quality of goods delivered or services performed will be satisfactory. This is especially problematic when the two organizations operate in different countries.

Differences in commercial laws and law enforcement are a problem for both enterprises and governments. Generally, enterprises cannot be confident that a foreign supplier’s government will take appropriate action if that supplier breaches a business contract. Government bodies, especially those involved in customs and taxation, want to be sure that they are kept well-informed of all transfers of goods and chargeable services into and from their countries, which can be difficult to achieve if the transfers are performed electronically.

Few industries have standards that are truly international, and many countries handle business accounting and taxation quite differently. Addresses, for example, can take many different forms around the globe, while certain countries do not use a social security number or other unique identifier for each citizen. Two types of organizations known as industry watchdogs and guarantors have been established to address various inhibitors to global trade.

Industry Watchdogs

An industry watchdog is a trusted third party that has the authority to certify companies that have met a recognized set of performance standards. This helps to promote free trade by reducing the risk of dealing with unknown suppliers. On the other hand, certification is not a guarantee of quality, and certified companies that commit a breach of trust may lose their status. In some countries, the capacity of watchdogs is limited to the regulation of companies within borders, while most regulators in the United States can only operate within an individual state.

Guarantors

Guarantors use the insurance model to provide more active protection of individual business transactions, ensuring that each of the parties involved in a specific single contract fulfills its obligations. A guarantor acts as an intermediary for commercial business transactions and reimburses the customer in the event that the supplier fails to meet contractual obligations. A common method of reimbursement is for the guarantor to act as an escrow account, taking payment from the customer but not paying the supplier until the goods or services have been provided.

The guarantor can profit from this approach by earning interest on the fees held in escrow. However, reimbursing customers for high-value business transactions gone awry without a relatively high volume of business can present a risk, and excessive reimbursement can damage the guarantor’s profitability. A relationship of trust with both clients and suppliers first needs to be established in order for the escrow model to succeed. A standalone retail transaction insurer could also use this business model.

The Big Data Scientist Certification is Arriving!
Pre-Order Pricing Will End Soon. Order Now!

www.bigdatascienceschool.com/certifications/scientist

Contributors

Jose Luiz Berg is a long-term project manager and systems architect specializing in Enterprise Application Integration (EAI). In the past few years, Jose has focused his work on implementing Service-Oriented Architecture (SOA) for large Brazilian telecommunication companies. He graduated in computer networks, but also has extensive experience over the last 25 years as a programmer in commercial programming languages. Jose believes that SOA is one of the most important advances in software development in recent decades, as it involves not only a change in the way we work, but also significantly changes how companies see themselves and their IT resources. This advancement carries risk, as many companies are being convinced by poor software vendors that SOA is only about creating Web services rather than focusing on what it really stands for, and in doing so they fail to realize that this is an important part of history in the making.

Contributions

■ Security and Identity Management Applied to SOA - Part II

■ Security and Identity Management Applied to SOA - Part I

■ The Integration Between EAI and SOA - Part II

■ The Integration Between EAI and SOA - Part I

Jose Luiz Berg

Dr. Pethuru Raj has been working as a TOGAF-certified enterprise architecture (EA) consultant at Wipro Technologies, Bangalore. On the educational front, armed with the competitive UGC research fellowship, he pursued his research activities and was awarded the prestigious PhD degree by Anna University, Chennai, India. He then acquired the meritorious CSIR fellowship to work as a postdoctoral researcher in the Department of Computer Science and Automation (CSA), Indian Institute of Science (IISc), Bangalore. Thereafter, he was granted a couple of international research fellowships (JSPS and JST) to work as a research scientist for 3 years at two leading Japanese universities. Dr. Raj also had a fruitful stint as a lead architect in the corporate research (CR) division of Robert Bosch, India, for 1.5 years.

Dr. Raj has more than 12 years of IT industry experience. Primarily a technical architect, he currently provides technology advisory services for worldwide business behemoths on the transformation capabilities of enterprise architecture (EA) in synchronization with emerging technologies such as the Internet of Things (IoT) / Cyber Physical Systems (CPS) / Machine-to-Machine (M2M) integration, Big Data, Cloud and Service Computing paradigms, real-time analytics of Big Data using cloud-based NoSQL databases, the Hadoop framework, and mobility. He has made use of the opportunities that came his way to focus on a few business domains, including telecommunications, retail, government, energy, and health care.

Pethuru Cheliah

Dr. Raj has contributed book chapters to a number of technology books that were edited by internationally acclaimed professors and published by leading publishing houses. He is currently writing a comprehensive book titled "The Internet of Things (IoT) Technologies for the Envisioned Smarter Planet" for a world-leading book house. CRC Press, USA, has just released his book "Cloud Enterprise Architecture"; the book details can be found at http://www.peterindia.net/peterbook.html

Contributions

■ A Look at Service-Driven Industry Models

■ Envisioning Converged Service Delivery Platforms (SDP 2.0) - Part II

■ Envisioning Converged Service Delivery Platforms (SDP 2.0) - Part I

■ Envisioning Insights - Driven Connected Vehicles

■ Envisioning Cloud-Inspired Smarter Homes

■ A Perspective of Green IT Technologies

Thomas Erl is a best-selling IT author and founder of CloudSchool.com™ and SOASchool.com®. Thomas has been the world’s top-selling service technology author for over five years and is the series editor of the Prentice Hall Service Technology Series from Thomas Erl (www.servicetechbooks.com ), as well as the editor of the Service Technology Magazine (www.servicetechmag.com). With over 175,000 copies in print world-wide, his eight published books have become international bestsellers and have been formally endorsed by senior members of major IT organizations, such as IBM, Microsoft, Oracle, Intel, Accenture, IEEE, HL7, MITRE, SAP, CISCO, HP, and others.

Four of his books, Cloud Computing: Concepts, Technology & Architecture, SOA Design Patterns, SOA Principles of Service Design, and SOA Governance, were authored in collaboration with the IT community and have contributed to the definition of cloud computing technology mechanisms, the service-oriented architectural model and service-orientation as a distinct paradigm. Thomas is currently working with over 20 authors on several new books dedicated to specialized topic areas such as cloud computing, Big Data, modern service technologies, and service-orientation.

As CEO of Arcitura Education Inc. and in cooperation with CloudSchool.com™ and SOASchool.com®, Thomas has led the development of curricula for the internationally recognized SOA Certified Professional (SOACP) and Cloud Certified Professional (CCP) accreditation programs, which have established a series of formal, vendor-neutral industry certifications.

Thomas Erl

Thomas is the founding member of the SOA Manifesto Working Group and author of the Annotated SOA Manifesto (www.soa-manifesto.com). He is a member of the Cloud Education & Credential Committee, SOA Education Committee, and he further oversees the SOAPatterns.org and CloudPatterns.org initiatives, which are dedicated to the on-going development of master pattern catalogs for service-oriented computing and cloud computing.

Thomas has toured over 20 countries as a speaker and instructor for public and private events, and regularly participates in international conferences, including SOA, Cloud + Service Technology Symposium and Gartner events. Over 100 articles and interviews by Thomas have been published in numerous publications, including the Wall Street Journal and CIO Magazine.

Clive Gee, Ph.D., one of IBM’s most experienced SOA governance practitioners, recently retired from his post as an Executive Consultant in the SOA Advanced Technologies group. He has worked in IT for more than 30 years, during the last few of which he led many SOA implementation and governance engagements for major clients all around the world, helping them to cope with the complexities of successfully transitioning to SOA. He now lives in Shetland, United Kingdom, but travels widely and does freelance consulting, especially in the area of SOA governance.

Contributions

■ A Look at Service-Driven Industry Models

■ SOA and Information Risk Management

■ Service Development Lifecycle Controls for Creating a Service Factory

Clive Gee

As a middleware expert, Jürgen works at Oracle EMEA Alliances and Channels, responsible for Oracle's EMEA Fusion Middleware partner business. He is the founder of the Oracle SOA & BPM and WebLogic Partner Communities and of the global Oracle Partner Advisory Councils. With more than 5000 members from all over the world, the Middleware Partner Communities are among the most successful and active communities at Oracle. Jürgen manages the community with monthly newsletters, webcasts, and conferences. He hosts his annual Fusion Middleware Partner Community Forums and the Fusion Middleware Summer Camps, where more than 200 partners get product updates, roadmap insights, and hands-on training, supplemented by many Web 2.0 tools such as Twitter, discussion forums, online communities, blogs, and wikis. For the SOA & Cloud Symposium by Thomas Erl, Jürgen is a member of the steering board. He is also a frequent speaker at conferences such as the SOA & BPM Integration Days, JAX, UKOUG, OUGN, and OOP.

Jürgen Kress

Contributions

■ A Look at Service-Driven Industry Models

■ Cloud Computing and SOA

■ SOA and Business Processes: You are the Process!

■ MDM and SOA: Be Warned!

■ Event-Driven SOA

■ SOA in Real Life: Mobile Solutions

■ Understanding Service Compensation

■ Securing the SOA Landscape

■ Enterprise Service Bus

■ SOA Maturity Alongside Contract Standardization

■ Canonizing a Language for Architecture: An SOA Service Category Matrix

■ Industrial SOA

■ SOA Blueprint: A Toolbox for Architects

Dr. Mark Little is VP Engineering at Red Hat, where he leads JBoss technical direction, research, and development. Prior to this he was the SOA Technical Development Manager and the Director of Standards. He was also the Chief Architect and Co-Founder at Arjuna Technologies, as well as a Distinguished Engineer at Hewlett Packard. He has worked in the area of reliable distributed systems since the mid-eighties. His Ph.D. was on fault-tolerant distributed systems, replication, and transactions. He is currently also a professor at Newcastle University.

Contributions

■ API Governance and Management

Mark Little

Leo Shuster is responsible for SOA strategy and execution at National City Bank. He has over 15 years of IT experience and, throughout his career, has performed a variety of roles including group manager, team lead, project manager, architect, and developer. His experience spans a number of industries, but over the past 7 years he has focused on the financial services sector. Mr. Shuster holds a Master's degree in Computer Science and Engineering from Case Western Reserve University and an MBA from Cleveland State University. He has presented on SOA and related topics to groups of all sizes at events such as the Gartner summits and the Enterprise Architecture Executive Council.

Leo Shuster

Hajo Normann works for Accenture in the role of SOA & BPM Community of Practice Lead in ASG. Hajo is responsible for the architecture and solution design of SOA/BPM projects, mostly acting as the interface between business and the IT sides. He enjoys tackling organizational and technical challenges and motivates solutions in customer workshops, conferences, and publications. Hajo leads together with Torsten Winterberg the DOAG SIG Middleware and is an Oracle ACE Director and an active member of a global network within Accenture, as well as in regular contact with SOA/BPM architects from around the world.

Contributions

■ A Look at Service-Driven Industry Models

■ Cloud Computing and SOA

■ SOA and Business Processes: You are the Process!

■ MDM and SOA: Be Warned!

■ Event-Driven SOA

■ SOA in Real Life: Mobile Solutions

■ Understanding Service Compensation

■ Securing the SOA Landscape

■ Enterprise Service Bus

■ SOA Maturity Alongside Contract Standardization

■ Canonizing a Language for Architecture: An SOA Service Category Matrix

■ Industrial SOA

■ SOA Blueprint: A Toolbox for Architects

Hajo Normann

He regularly blogs about advanced software architecture issues at leoshuster.blogspot.com and can be reached at [email protected]

Contributions

■ A Look at Service-Driven Industry Models

■ Driving SOA Governance - Part III: Organizational Aspects

■ Driving SOA Governance - Part II: Operational Considerations

■ Driving SOA Governance - Part I

■ Project-Oriented SOA

Longji Tang is a Senior Technical Advisor in FedEx IT and Professor at the School of Information Science and Engineering at Hunan University. His research focuses on software architecture and design, service-oriented architecture, service computing, cloud computing, mobile computing, big data computing, and system modeling as well as formalism. He began graduate studies at Penn State University in 1992 and graduated in 1995 with a Master of Engineering degree in Computer Science & Engineering and a Master of Arts degree in Applied Mathematics. Longji started his part-time PhD studies in 2005 and obtained his PhD degree in Software Engineering in 2011. He has published more than 35 research papers ranging from data science, numerical analysis, and inverse problems to SOA, cloud, and mobile computing. He is a member of the Program Committee of the 2013/2014/2015 IEEE Mobile Cloud International Conferences.

Contributions

■ API Governance and Management

■ Enterprise Mobile Services Architecture: Challenges and Approaches

■ SLA-Aware Enterprise Service Computing - Part II

■ SLA-Aware Enterprise Service Computing - Part I

■ Modeling and Analyzing Enterprise Cloud Service Architecture - Part II

■ Modeling and Analyzing Enterprise Cloud Service Architecture - Part I

Longji Tang

Bernd Trops is a Senior Principal Consultant at Talend Inc. In this role he is responsible for client project management and training. Bernd is responsible for all Talend projects within Deutsche Post and for the introduction of new versions and components. Before Talend, Bernd was a Systems Engineer working on various projects for GemStone, Brocade, and WebGain, and therefore has extensive experience in J2EE and SOA. From 2003 to 2007 Bernd worked as a SOA Architect at Oracle.

Contributions

■ A Look at Service-Driven Industry Models

■ Cloud Computing and SOA

■ SOA and Business Processes: You are the Process!

■ MDM and SOA: Be Warned!

■ Event-Driven SOA

■ SOA in Real Life: Mobile Solutions

■ Understanding Service Compensation

■ Securing the SOA Landscape

■ Enterprise Service Bus

■ SOA Maturity Alongside Contract Standardization

■ Canonizing a Language for Architecture: An SOA Service Category Matrix

■ Industrial SOA

Bernd Trops

Clemens worked as Chief Architect for the Shared Service Centre, Global Business Services, Boehringer Ingelheim, covering architecture, master data, service management, and innovation. At the moment he works on a holistic enterprise architecture that provides the methodological platform for the new master data management. He previously worked as a Platform Architect at Oracle Inc. in the United States, where he helped to develop the next product strategy as well as the SOA BPM Suite.

Contributions

■ A Look at Service-Driven Industry Models

■ Cloud Computing and SOA

Clemens Utschig-Utschig

■ SOA and Business Processes: You are the Process!

■ MDM and SOA: Be Warned!

■ Event-Driven SOA

■ SOA in Real Life: Mobile Solutions

■ Understanding Service Compensation

■ Securing the SOA Landscape

■ Enterprise Service Bus

■ SOA Maturity Alongside Contract Standardization

■ Canonizing a Language for Architecture: An SOA Service Category Matrix

■ Industrial SOA

■ SOA Blueprint: A Toolbox for Architects

Philip Wik is a Database Administrator for Redflex. Philip has worked for JP Morgan/Chase, Wells Fargo, American Express, Honeywell, Boeing, Intel, and other companies in a variety of applications development, integration, and architectural roles. He has published two books through Prentice-Hall: How to Do Business With the People’s Republic of China and How to Buy and Manage Income Property.

Contributions

■ A Look at Service-Driven Industry Models

■ Big Data as a Service

■ Patterns and Principles in the Real World - Part II

■ Patterns and Principles in the Real World - Part I

■ Architecting Service-Oriented Technologies

■ Thunder Clouds: Managing SOA-Cloud Risk - Part II

■ Thunder Clouds: Managing SOA-Cloud Risk - Part I

■ Service-Oriented Architecture and Business Intelligence

■ Confronting SOA’s Four Horsemen of the Apocalypse

■ Machiavelli’s SOA: Toward a Theory of SOA Security

■ Effective Top-down SOA Management in Efficient Bottom-up Agile World - Part II

■ Effective Top-down SOA Management in Efficient Bottom-up Agile World - Part I

Philip Wik

Torsten Winterberg works for Oracle Platinum Partner OPITZ CONSULTING. As a director of the competence center for integration and business process solutions, he follows his passion to build the best delivery unit for customer solutions in the area of SOA and BPM. He has long-standing experience as a developer, coach, and architect building complex, mission-critical Java EE applications. He is a well-known speaker in the German Java and Oracle communities and has written numerous articles on SOA/BPM-related topics. Torsten is part of the Oracle ACE Director team (ACE = Acknowledged Community Expert) and leads the DOAG middleware community.

Contributions

■ A Look at Service-Driven Industry Models

■ Cloud Computing and SOA

■ SOA and Business Processes: You are the Process!

■ MDM and SOA: Be Warned!

■ Event-Driven SOA

■ SOA in Real Life: Mobile Solutions

■ Understanding Service Compensation

■ Securing the SOA Landscape

■ Enterprise Service Bus

■ SOA Maturity Alongside Contract Standardization

■ Canonizing a Language for Architecture: An SOA Service Category Matrix

■ Industrial SOA

■ SOA Blueprint: A Toolbox for Architects

Torsten Winterberg

Join the thousands of members of the growing international Arcitura community. Launched for the first time in mid-2011, Arcitura Education made official social media communities available via LinkedIn, Twitter, and Facebook. These new communities join the already existing memberships of LinkedIn, Twitter, and Facebook platforms for the Prentice Hall Service Technology Book Series from Thomas Erl.

www.arcitura.com/community

Arcitura IT Certified Professionals (AITCP) Community

Copyright © Arcitura Education Inc.

The Service Technology Magazine is a monthly online publication provided by Arcitura Education Inc. and officially associated with the "Prentice Hall Service Technology Book Series from Thomas Erl." The Service Technology Magazine is dedicated to publishing specialized articles, case studies, and papers by industry experts and professionals in the fields of service-oriented architecture (SOA), cloud computing, Big Data, semantic Web technologies, and other areas of services-based technology, innovation, and practice.

www.servicetechnologymagazine.com

www.servicetechmag.com