Open Source to You - August 2014


Page 8: Open Source to You - August 2014

YOU SAID IT


More content for non-IT readers

I have been reading your magazine for the last few years. The company I work in is in the manufacturing industry, and your subscribers' database may have more individuals like me, from companies that are not directly related to the IT industry.

Currently, your primary focus is on technical matters, and the magazine carries articles written by skilled technical individuals, so OSFY is really helpful for open source developers. However, you also have some non-IT subscribers like us, who can understand that something great is available in the open source domain, which they can deploy to reduce their IT costs. But, unfortunately, your magazine does not inform us about open source solutions providers.

I request you to introduce the companies that provide end-to-end IT solutions on open source platforms including thin clients, desktops, servers, virtualisation, embedded customised OSs, ERP, CRM, MRP, emails and file servers, etc. Kindly publish relevant case studies, with the overall cost savings and benefits. Just as you feature job vacancies, do give us information about the solutions providers I mentioned above.

—Shekhar Ranjankar; [email protected]

ED: Thank you for your valuable feedback. We do carry case studies of companies deploying open source, from time to time. We also regularly carry a list of different solutions providers from different open source sectors. We will surely take note of your suggestion and try to continue carrying content that interests non-IT readers too.

Requesting an article on Linux server migration

I am glad to receive my first copy of OSFY. I have a suggestion to make: if possible, please include an article on migrating to VMware (from a Linux physical server to VMware ESX). Also, do provide an overview of some open source tools (like Ghost for Linux) to take an image of a physical Linux server.

—Rohit Rajput; [email protected]

Share your feedback! Please send your comments or suggestions to: The Editor, Open Source For You, D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020; Phone: 011-26810601/02/03; Fax: 011-26817563; Email: [email protected]

ED: It’s great to hear from you. We will definitely cover the topics suggested by you in one of our forthcoming issues. Keep reading our magazine. And do feel free to get in touch with us if you have any such valuable feedback.

A request for the Backtrack OS to be bundled on the DVD

I am a huge fan of Open Source For You. Thank you for bundling the Ubuntu DVD with the May 2014 issue. Some of my team members and I require the Backtrack OS. Could you provide this in your next edition? I am studying information sciences for my undergrad degree. Please suggest the important programming languages that I should become proficient in.

—Aravind Naik; [email protected]

ED: Thanks for writing in to us. We're pleased to know that you liked the DVD. BackTrack is no longer being maintained; its updated version for penetration testing is now known as 'Kali Linux', and we bundled it with the April 2014 issue of OSFY. For career-related queries, you can refer to older OSFY issues or find related articles on www.opensourceforu.com.

Overseas subscriptions

Previously, I used to get copies of LINUX For You/Open Source For You and Electronics For You from local book stores but, lately, none of them carry these magazines any more. So how can I get copies of all these magazines in Malaysia, and where can I get previous issues too?

—Abdullah Abd. Hamid; [email protected]

ED: Thank you for reaching out to us. Currently, we do not have any reseller or distributor in Malaysia for news stand sales, but you can always subscribe to the print edition or the e-zine version of the magazines. You can find the details of how to subscribe to the print editions on www.pay.efyindia.com and for the e-zine version, please go to www.ezines.efyindia.com

Page 9: Open Source to You - August 2014

OFFERS OF THE MONTH

To advertise here, contact Omar on +91-995 888 1862 or 011-26810601/02/03, or write to [email protected]

www.opensourceforu.com

www.space2host.com

Get 10% discount
Free Dedicated hosting/VPS for one month. Subscribe for an annual package of Dedicated hosting/VPS and get one month FREE.
Reseller package special offer!
Contact us at 09841073179 or write to [email protected]

2000 Rupees Coupon (Free Trial Coupon)
No condition attached for trial of our cloud platform.
Enjoy & please share feedback at [email protected]
For more information, call us on 1800-212-2022 / +91-120-666-7718
www.cloudoye.com, www.esds.co.in

Hurry! Offer valid till 31st August 2014!

Free Dedicated Server Hosting for one month
For more information, call us on 1800-209-3006 / +91-253-6636500

One month free
Subscribe for our Annual Package of Dedicated Server Hosting & enjoy one month's free service

Subscribe for the Annual Packages of Dedicated Server Hosting & enjoy the next 12 months' services free

Get 12 Months Free
For more information, call us on 1800-212-2022 / +91-120-666-7777

www.goforhosting.com

Hurry! Offer valid till 31st August 2014!

Pay annually & get 12 months' free services on Dedicated Server Hosting

Get 35% off on course fees and, if you appear for two Red Hat exams, the second shot is free.
35% off & more
Contact us @ 98409 82184/85 or write to [email protected]
www.vectratech.in

Hurry! Offer valid till 31st August 2014!

"Do not wait! Be a part of the winning team"

www.prox.packwebhosting.com

Contact us at 98769-44977 or Write to [email protected]

Get 25% Off

Considering VPS or a Dedicated Server? Save Big !!! And go with our ProX Plans

Hurry! Offer valid till 31st August 2014!

PACKWEB ProX - PACK WEB HOSTING - Time to go PRO now
25% Off on ProX Plans - ideal for running high-traffic or e-commerce websites. Coupon Code: OSFY2014

Page 14: Open Source to You - August 2014

CentOS 7 now available

The CentOS Project has announced the general availability of CentOS 7, the first release of the free Linux distro based on the source code of Red Hat Enterprise Linux (RHEL) 7. It is the first major release since the collaboration between the CentOS Project and Red Hat began. CentOS 7 is built from the freely available RHEL 7 source code tree, and its features closely resemble those of Red Hat's latest operating system. Just like RHEL 7, it is powered by version 3.10.0 of the Linux kernel, with XFS as the default file system. It is also the first version to include the systemd management engine, the dynamic firewall system called firewalld, and the GRUB2 boot loader.

The default Java Development Kit has also been upgraded to OpenJDK 7, and the system now ships with open-vm-tools and 3D graphics drivers out-of-the-box. Also, like RHEL 7, this is the first version of CentOS to offer an in-place upgrade path. Soon, users will be able to upgrade from CentOS 6.5 to CentOS 7 without reformatting their systems.

The CentOS team has launched a new build process, in which the entire distro is built from code hosted at the CentOS Project’s own Git repository. Source code packages (SRPMs) are created as a side effect of the build cycle, and will be hosted on the main CentOS download servers.

Disc images of CentOS 7, which include separate builds for the GNOME and KDE desktops, a live CD image and a network-installable version, are also now available.

Google to launch Android One smartphones with MediaTek chipset

Google made an announcement about its Android One program at the recent Google I/O 2014 in San Francisco, California. The company plans to launch devices powered by Android One in India first, with companies like Micromax, Spice and Karbonn. Android One has been launched to reduce the production costs of phones. The manufacturers mentioned earlier will be able to launch US$ 100 phones based on this platform. Google will handle the software part, using Android One, so phones will get firmware updates directly from Google. This is surprising because low budget phones usually don't receive any software updates. Sundar Pichai, Android head at Google, showcased a Micromax device at the show. The Micromax Android One phone has an 11.43 cm (4.5 inch) display, FM radio, an SD card slot and dual SIM slots. Google has reportedly partnered with MediaTek for chipsets to power the Android One devices. We speculate that it is MediaTek's MT6575 dual core processor that has been packed into Micromax's Android One phone.

It is worth mentioning here that 78 per cent of the smartphones launched in Q1 of 2014 were priced around US$ 200 in India. So Google’s Android One will definitely herald major changes in this market. Google will also provide manufacturers with guidelines on hardware designs. And it has tied up with hardware component companies to provide high volume parts to manufacturers at a lower cost in order to bring out budget Android smartphones.

FOSSBYTES

A rare SMS worm is attacking your Android device!

Android does get attacked by Trojan apps that have no self-propagation mechanism, so users don't notice the malfunction. But here's a different, rather rare, mode of attack that Android devices are now facing. Selfmite is an SMS worm, the second such worm found in the past two months. Selfmite automatically sends SMSs to users with their name in the message. The SMS contains a shortened URL that prompts users to install a third-party APK file called TheSelfTimerV1.apk. The SMS says, "Dear [name], Look the Self-time..". A remote server hosts this malware application. Users can find SelfTimer installed in the app drawer of their Android devices.

The Selfmite worm shows a pop-up to download mobogenie_122141003.apk, which offers synchronisation between Android devices and PCs. The app has over 50 million downloads on the Play Store, but all are through various paid referral schemes and promotion programmes. Researchers at Adaptive Mobile believe that a number of Mobogenie downloads are promoted through some malicious software used by an unknown advertising platform. A popular vendor of security solutions in North America detected dozens of devices that were infected with Selfmite. The shortened URL pointing to the malicious app was created with Google's URL shortener; the APK link was visited 2,140 times before Google disabled it.

By default, Android blocks the installation of apps from unknown and unauthorised developers. But some users enable the installation of apps from 'unknown sources', and their devices become targets for worms like this.

Powered by www.efytimes.com


Page 15: Open Source to You - August 2014

OSFY Classifieds for Linux & Open Source IT Training Institutes

Classifieds

WESTERN REGION

Linux Lab (empowering Linux mastery)
Courses Offered: Enterprise Linux & VMware
Address (HQ): 1104, D' Gold House, Nr. Bharat Petrol Pump, Ghyaneshwer Paduka Chowk, FC Road, Shivajinagar, Pune - 411 005
Contact Person: Mr. Bhavesh M. Nayani
Contact No.: +020 60602277, +91 8793342945
Email: [email protected]
Branch(es): coming soon
Website: www.linuxlab.org.in

NORTHERN REGION

*astTECS Academy
Courses Offered: Basic Asterisk Course, Advanced Asterisk Course, Free PBX Course, Vici Dial Administration Course
Address (HQ): 1176, 12th B Main, HAL 2nd Stage, Indiranagar, Bangalore - 560008, India
Contact Person: Lt. Col. Shaju N. T.
Contact No.: +91-9611192237
Email: [email protected]
Website: www.asttecs.com, www.asterisk-training.com

IPSR Solutions Ltd.
Courses Offered: RHCE, RHCVA, RHCSS, RHCDS, RHCA (produced the highest number of Red Hat professionals in the world)
Address (HQ): Merchant's Association Building, M.L. Road, Kottayam - 686001, Kerala, India
Contact Person: Benila Mendus
Contact No.: +91-9447294635
Email: [email protected]
Branch(es): Kozhikode, Thrissur, Trivandrum
Website: www.ipsr.org

GRRAS Linux Training and Development Center
Courses Offered: RHCE, RHCSS, RHCVA, CCNA, PHP, Shell Scripting (online training is also available)
Address (HQ): GRRAS Linux Training and Development Center, 219, Himmat Nagar, Behind Kiran Sweets, Gopalpura Turn, Tonk Road, Jaipur, Rajasthan, India
Contact Person: Mr. Akhilesh Jain
Contact No.: +91-141-3136868 / +91-9983340133, 9785598711, 9887789124
Email: [email protected]
Branch(es): Nagpur, Pune
Website(s): www.grras.org, www.grras.com

SOUTHERN REGION

Advantage Pro
Courses Offered: RHCSS, RHCVA, RHCE, PHP, Perl, Python, Ruby, Ajax; a prominent player in open source technology
Address (HQ): 1 & 2, 4th Floor, Jhaver Plaza, 1A Nungambakkam High Road, Chennai - 600 034, India
Contact Person: Ms. Rema
Contact No.: +91-9840982185
Email: [email protected]
Website(s): www.vectratech.in

Linux Learning Centre
Courses Offered: Linux OS Admin & Security Courses for Migration, Courses for Developers, RHCE, RHCVA, RHCSS, NCLP
Address (HQ): 635, 6th Main Road, Hanumanthnagar, Bangalore - 560 019, India
Contact Person: Mr. Ramesh Kumar
Contact No.: +91-80-22428538, 26780762, 65680048 / +91-9845057731, 9449857731
Email: [email protected]
Branch(es): Bangalore
Website: www.linuxlearningcentre.com

Academy of Engineering and Management (AEM)
Courses Offered: RHCE, RHCVA, RHCSS, Clustering & Storage, Advanced Linux, Shell Scripting, CCNA, MCITP, A+, N+
Address (HQ): North Kolkata, 2/80 Dumdum Road, Near Dumdum Metro Station, 1st & 2nd Floor, Kolkata - 700074
Contact Person: Mr. Tuhin Sinha
Contact No.: +91-9830075018, 9830051236
Email: [email protected]
Branch(es): North & South Kolkata
Website: www.aemk.org

EASTERN REGION

Duestor Technologies
Courses Offered: Solaris, AIX, RHEL, HP UX, SAN Administration (NetApp, EMC, HDS, HP), Virtualisation (VMware, Citrix, OVM), Cloud Computing, Enterprise Middleware
Address (HQ): 2-88, 1st Floor, Sai Nagar Colony, Chaitanyapuri, Hyderabad - 060
Contact Person: Mr. Amit
Contact No.: +91-9030450039, +91-9030450397
Email: [email protected]
Website: www.duestor.com


Linux Training & Certification
Courses Offered: RHCSA, RHCE, RHCVA, RHCSS, NCLA, NCLP, Linux Basics, Shell Scripting, MySQL (coming soon)
Address (HQ): 104B Instant Plaza, Behind Nagrik Stores, Near Ashok Cinema, Thane Station West - 400601, Maharashtra, India
Contact Person: Ms. Swati Farde
Contact No.: +91-22-25379116 / +91-9869502832
Email: [email protected]
Website: www.ltcert.com

Page 16: Open Source to You - August 2014

FOSSBYTES

Calendar of Forthcoming Events
(Each listing below gives the event's name, date and venue; a description; and contact details and website.)

4th Annual Datacenter Dynamics Converged. September 18, 2014; Bengaluru

The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives.

Praveen Nair; Email: [email protected]; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

Gartner Symposium IT Xpo, October 14-17, 2014; Grand Hyatt, Goa

CIOs and senior IT executives from across the world will gather at this event, which offers talks and workshops on new ideas and strategies in the IT industry.

Website: http://www.gartner.com

Open Source India, November 7-8, 2014; NIMHANS Center, Bengaluru

Asia’s premier open source conference that aims to nurture and promote the open source ecosystem across the sub-continent.

Omar Farooq; Email: [email protected]; Ph: 09958881862; Website: http://www.osidays.com

CeBIT, November 12-14, 2014; BIEC, Bengaluru

This is one of the world’s leading business IT events, and offers a combination of services and benefits that will strengthen the Indian IT and ITES markets.

Website: http://www.cebit-india.com/

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh

The event aims to assist the community in the datacentre domain by exchanging ideas, accessing market knowledge and launching new initiatives.

Praveen Nair; Email: [email protected]; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

HostingCon India, December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai

This event will be attended by Web hosting companies, Web design companies, domain and hosting resellers, ISPs and SMBs from across the world.

Website: http://www.hostingcon.com/contact-us/

New podcast app for Linux is now ready for testing

An all-new podcast app for Ubuntu was launched recently. This app, called 'Vocal', has a great UI and design. Nathan Dyer, the developer of this project, has released unstable beta builds of the app for Ubuntu 14.04 and 14.10, for testing purposes.

Only next-gen easy-to-use desktops are capable of running the beta version of Vocal. Installing beta versions of the app on Ubuntu is not as difficult as installing them on KDE, GNOME or Unity, but users can't try the beta version of Vocal without installing the unstable elementary desktop PPA. Vocal is an open source app, and one can easily port it to mainstream Linux versions from Ubuntu. However, Dyer suggests users wait until the first official beta version of the app for easy-to-use desktops is available.

The official developer’s blog has a detailed report on the project.

CoreOS Linux comes out with Linux containers as a service!

CoreOS has launched a commercial service to ease the workload of systems administrators. The new commercial Linux distribution service can update itself automatically, so systems administrators do not have to perform major updates manually. Linux-based companies like Red Hat and SUSE use open source and free applications and libraries for their operations, yet offer commercial subscription services for enterprise editions of Linux. These services cover software, updates, integration, technical support, bug fixes, etc.

CoreOS has a different strategy compared to the competing services offered by other players in the service, support and distribution industries. Users will not have to apply major updates themselves, since CoreOS wants to save them the hassle of manually updating all packages. The company plans to stream copies of updates directly to the OS. CoreOS has named this software 'CoreUpdate'. It controls and monitors

Expect an Android Wear app section along with the Google Play Services update

Google recently started rolling out its Google Play Services 5.0 update to all devices. This version is an advance on the existing 4.4, bringing the Android wearable services API and much more. Mainly focused on developers, this version was announced at Google I/O 2014. According to the search giant's blog, the newest version of Google Play Services includes many updates that can increase app performance. These include wearable APIs, a dynamic security provider, improvements in Drive, Wallet and Google Analytics, etc. The main focus is on the Android Wear platform and APIs, which will enable more applications on these devices. In addition to this, Google has announced a separate section for Android Wear apps in the Play store.

These apps for the Android Wear section in the Google Play store come from Google itself. The collection includes official companion apps for Android devices, Hangouts and Google Maps. The main purpose of the Android Wear Companion app is to let users manage their devices from Android smartphones. It provides voice support, notifications and more. There are third party apps as well from Pinterest, Banjo and Duolingo.

Google plans to remove QuickOffice from app stores

Google has announced the company's future plans for Google Docs, Slides and Sheets. It has now integrated the QuickOffice service into Google Docs, so there is no longer a need for the separate Google QuickOffice app. QuickOffice was acquired by Google in 2012. It provided free document viewing and editing on Android and iOS for two years. Google has decided to discontinue this free standalone app.

The firm has integrated QuickOffice into the Google Docs, Sheets and Slides app. The QuickOffice app will be removed from the Play Store and Apple’s App Store soon and users will not be able to see or install it. Existing users will be able to continue to use the old version of the app.


Page 17: Open Source to You - August 2014

FOSSBYTES

software packages and their updates, and also provides controls for administrators to manually update a few packages if they want to. It has a roll-back feature in case an update causes any malfunction in a machine. CoreUpdate can manage multiple systems at a time.

CoreOS was designed to promote the use of an open source OS kernel that is used in a lot of cloud-based virtual servers. A CoreOS instance consumes less than half the resources of a typical Linux distribution instance. Applications run in virtualised containers managed by Docker, and they can start instantly. CoreOS was launched in December last year. It uses two partitions, which help in easily updating distributions: one partition contains the current OS, while the other is used to store the updated OS. This smoothens the entire process of upgrading a package or an entire distribution. The service can be installed and run directly on a system or via cloud services like Amazon, Google or Rackspace. The venture capital firm Kleiner Perkins Caufield & Byers has invested over US$ 8 million in CoreOS. The company was also backed by Sequoia Capital and Fuel Capital in the past.

Mozilla to launch Firefox-based streaming dongle, Netcast

After the successful launch of Google's Chromecast, which has sold in millions, everyone else has discovered the potential of streaming devices. Recently, Amazon and Roku launched their devices. According to GigaOM, Mozilla will soon enter the market with its Firefox-powered streaming device. A Mozilla enthusiast, Christian Heilmann, recently uploaded a photo of Mozilla's prototype streaming device on Twitter. People at GigaOM managed to dig out more on it and even got their hands on the prototype as soon as that leaked photo went viral on Twitter. The device provides better functionality and options than Chromecast. Mozilla has partnered with an as yet unknown manufacturer to build this device. The prototype has been sent to some developers for testing and reviews. The device, which is called Netcast, has a hackable open bootloader, which lets it run some Chromecast apps.

Mozilla has always looked for an open environment for its products. It is expected

Linux Foundation releases Automotive Grade Linux to power cars

The Linux Foundation recently released Automotive Grade Linux (AGL) to power automobiles, a move that marks its first steps into the automotive industry. The Linux Foundation is sponsoring the AGL project to collaborate with the automotive, computing hardware and communications industries, apart from academia and other sectors. The first release of this system is available for free on the Internet. A Linux-based platform called Tizen IVI is used to power AGL. Tizen IVI was primarily designed for a broad range of devices—from smartphones and TVs to cars and laptops.

Here is the list of features that you can experience in the first release of AGL: a dashboard, Bluetooth calling, Google Maps, HVAC, audio controls, Smartphone Link Integration, media playback, home screen and news reader. The Linux Foundation and its partners are expecting this project to change the future of open source software. They hope to see next-generation car entertainment, navigation and other tools to be powered by open source software. The Linux Foundation expects collaborators to add new features and capabilities in future releases. Development of AGL is expected to continue steadily.


Page 18: Open Source to You - August 2014

FOSSBYTES

that the company’s streaming stick will come with open source technology, which will help developers to develop HDTV streaming apps for smartphones.

Opera is once again available on Linux

Norwegian Web browser company Opera has finally released a beta version of its browser for Linux. This Opera 24 version for Linux has the same features as Opera 24 on the Windows and Mac platforms. Chrome and Firefox are currently the two most used browsers on the Linux platform; Opera 24 will be a good alternative to them.

As of now, only the developer or beta version of Opera for Linux is available; we are hoping to see a stable version in the near future. In this beta version, Linux users will get to experience popular Opera features like Speed Dial, Discover and Stash. Speed Dial is a home page that gives users an overview of their history, folders and bookmarks. Discover is an RSS reader embedded within the browser; gathering and reading articles of interest is easier with the Discover feature. Stash is like Pinterest within a browser, and its UI is inspired by Pinterest. It allows users to collect websites and categorise them. Stash is designed to enable users to plan their travel, work and personal lives with a collection of links.

Unlock your Moto X with your tattoo

Motorola is implementing an alternative security system for the Moto X. It is frustrating to remember difficult passwords, while simpler passwords are easy to crack. To counter this, VivaLnk has launched digital tattoos. The tattoo automatically unlocks the Moto X when applied to the skin.

The technology is based on Near Field Communication (NFC) to connect with smartphones and authenticate access. Motorola is working on optimising digital tattoos with Google's Advanced Technology and Projects group.

The pricing is on the higher side, but this is a great initiative in wearable technology. Developing user-friendly alternatives to passwords and PINs has been a major focus of tech companies. Motorola had talked about this in the introductory session of the D11 conference in California this May, when it discussed the idea of passwords in pills or tattoos. The idea may seem like a gimmick, but you never know when it will become commonly used. VivaLnk is working on making this technology compatible with other smartphones too. It is considering entering the domain of creating tattoos of different types and designs.

OpenSSL flaws fixed by PHP

PHP recently pushed out new versions of its popular scripting language, which fix many crucial bugs, two of them in OpenSSL. The flaws are not as serious as Heartbleed, which popped up a couple of months back; both are related to OpenSSL's handling of timestamps. PHP 5.5.14 and 5.4.30 fix both flaws.

Microsoft to abandon X-series Android smartphones too

It hasn't been long since Microsoft ventured into the Android market with its X series devices, and the company has already revealed plans to abandon the series. With the announcement of up to 18,000 job cuts, the company is also phasing out its feature phones and the recently launched Nokia X Android smartphones.

Here are excerpts of an internal email sent by Jo Harlow, who heads the phone business under Microsoft devices, to Microsoft employees: “Placing Mobile Phone services in maintenance mode: With the clear focus on Windows Phones, all Mobile Phones-related services and enablers are planned to move into maintenance mode; effective: immediately. This means there will be no new features or updates to services on any Mobile Phones platform as a result of these plans. We plan to consider strategic options for Xpress Browser to enable continuation of the service outside of Microsoft. We are committed to supporting our existing customers, and will ensure proper operation during the controlled shutdown of services over the next 18 months. A detailed plan and timeline for each service will be communicated over the coming weeks.

"Transitioning developer efforts and investments: We plan to transition developer efforts and investments to focus on the Windows ecosystem while improving the company's financial performance. To focus on the growing momentum behind Windows Phone, we plan to immediately begin ramping down developer engagement activities related to Nokia X, Asha and Series 40 apps, and shift support to maintenance mode."


Page 19: Open Source to You - August 2014

FOSSBYTES

The other bugs that were fixed were not security related, but of a more general nature.

iberry introduces the Auxus Linea L1 smartphone and Auxus AX04 tablet in India

In a bid to expand its portfolio in the Indian market, iberry has introduced two new Android KitKat-powered devices in the country—a smartphone and a tablet. The Auxus Linea L1 smartphone is priced at Rs 6,990 and the Auxus AX04 tablet at Rs 5,990. Both have been available from the online megastore eBay India since June 25 this year.

The iberry Auxus Linea L1 smartphone features a 11.43 cm (4.5 inch) display with OGS technology and Gorilla Glass protection. It is powered by a 1.3GHz quad-core MediaTek (MT6582) processor coupled with 1 GB of DDR3 RAM. It sports a 5 MP rear camera with an LED flash and a 2 MP front-facing camera. It comes with 4 GB of inbuilt storage expandable up to 64 GB via microSD card. The dual-SIM device runs Android 4.4 KitKat, out-of-the-box. The 3G-supporting smartphone has a 2000mAh battery.

Meanwhile, the iberry Auxus AX04 tablet features a 17.78 cm (7 inch) IPS display. It is powered by a 1.5 GHz dual-core processor (unspecified chipset) coupled with 512 MB of RAM. The voice-supporting tablet sports a 2 MP rear camera and a 0.3 MP front-facing camera. It comes with 4 GB of built-in storage expandable up to 64 GB via micro-SD card slot. The dual-SIM device runs Android 4.4 KitKat out-of-the-box. It has a 3000mAh battery.

Google to splurge a whopping Rs 1,000 million on marketing Android One

It looks like global search engine giant Google wants to leave no stone unturned in its quest to make its ambitious Android One smartphone-for-the-masses project reach its vastly dispersed target audience in emerging economies (including India). The buzz is that Google is planning to splurge over a whopping Rs 1,000 million, along with its official partners, on advertising and marketing for the platform. Even as Sundar Pichai, senior VP at Google who is in charge of Android, Chrome and Apps, is all set to launch the first batch of low budget Android smartphones in India sometime in October this year, the latest development shows how serious Google is about the project.

It was observed that Google’s OEM partners were forced into launching a new smartphone every nine months to stay ahead in the cut-throat competition. However, thanks to Google’s new Android hardware and software reference platform, its partners will now be able to save money and get enough time to choose the right components, before pushing their smartphones into the market. Android One will also allow them to push updates to their Android devices, offering an optimised stock Android experience. With the Android One platform falling into place, Google will be able to ensure a minimum set of standards for Android-based smartphones.


Page 20: Open Source to You - August 2014

FOSSBYTES

With the Android One platform, Google aims to reach the 5 billion people across the world who still do not own a smartphone. According to Pichai, less than 10 per cent of the world’s population owns smartphones in emerging countries. The promise of a stock Android experience at a low price point is what Android One aims to provide. Home-grown manufacturers such as Micromax, Karbonn and Spice will create and sell these Android One phones for which hardware reference points, software and subsequent updates will be provided by Google. Even though the spec sheet of Android One phones hasn’t been officially released, Micromax is already working on its next low budget phone, which many believe will be an Android One device.

SQL injection vulnerabilities patched in Ruby on Rails

Two SQL injection vulnerabilities have been patched in Ruby on Rails, the open source Web development framework now used by many developers. Some high profile websites also use this framework. The Ruby on Rails developers recently released versions 3.2.19, 4.0.7 and 4.1.3, and advised users to upgrade to these versions as soon as possible. A few hours later, they released versions 4.0.8 and 4.1.4 to fix problems caused by the 4.0.7 and 4.1.3 updates.

One of the two SQL injection vulnerabilities affects applications running on Ruby on Rails versions 2.0.0 through 3.2.18 that use the PostgreSQL database system and query bit string data types. The other vulnerability affects applications running on Ruby on Rails versions 4.0.0 to 4.1.2 that use PostgreSQL and query range data types.

Despite affecting different versions, these two flaws are related and allow attackers to inject arbitrary SQL code using crafted values.

The city of Munich adopts Linux in a big way!

It's certainly not a case of an overnight conversion. The city of Munich began to seek open source alternatives way back in 2003.

With a population of about 1.5 million citizens and thousands of employees, this German city took its time to adopt open source. Tens of thousands of government workstations were to be considered for the change. Its initial shopping list had suitably rigid specifications, spanning everything from avoiding vendor lock-in and receiving regular hardware support updates, to having access to an expansive range of free applications.

In its first stage of migration, in 2006, Debian was introduced across a small percentage of government workstations, with the remaining Windows computers switching to OpenOffice.org, followed by Firefox and Thunderbird.

Debian was replaced by a custom Ubuntu-based distribution named 'LiMux' in 2008, after the team handling the project 'realised Ubuntu was the platform that could satisfy our requirements best.'

Linux kernel 3.2.61 LTS officially released

The launch of the Linux kernel 3.2.61 LTS, the brand-new maintenance release of the 3.2 kernel series, has been officially announced by Ben Hutchings, the maintainer of the Linux 3.2 kernel branch. While highlighting the slew of changes that come bundled with the latest release, Hutchings advised users to upgrade to it as early as possible.

The Linux kernel 3.2.61 is an important release in the cycle, according to Hutchings. It introduces better support for the x86, ARM, PowerPC, s390 and MIPS architectures. At the same time, it also improves support for the EXT4, ReiserFS, Btrfs, NFS and UBIFS file systems. It also comes with updated drivers for wireless connectivity, InfiniBand, USB, ACPI, Bluetooth, SCSI, Radeon and Intel i915, among others.

Meanwhile, Linux founder Linus Torvalds has officially announced the fifth Release Candidate (RC) version of the upcoming Linux kernel 3.16. The RC5 is a successor to Linux 3.16-rc4. It is now available for download and testing. However, since it is a development version, it should not be installed on production machines.

Motorola brings out Android 4.4.4 KitKat upgrade for Moto E, Moto G and Moto X

Motorola has unveiled the Android 4.4.4 KitKat update for its devices in India: the Moto E, Moto G and Moto X. This latest version of Android has an extra layer of security for browsing Web content on the phone.

With this phased rollout, users will receive notifications that will enable them to update their OS but, alternatively, the update can also be accessed by way of the settings menu. This release goes on to shore up Motorola’s commitment to offering its customers a pure, bloatware-free and seamless Android experience.


Page 21: Open Source to You - August 2014

In The News

SUSE Partners with Karunya University to Make Engineers Employable

In one of the very first initiatives of its kind, SUSE and Novell have partnered with Karunya University, Coimbatore, to ensure its students are industry-ready.

Out of the many interviews that we have conducted with recruiters, asking them what they look for in a candidate, one common requirement seems to be knowledge of open source technology. As per NASSCOM reports, between 20 and 33 per cent of the million students who graduate from India's engineering colleges every year run the risk of being unemployed.

The Attachmate Group, along with Karunya University, has taken a step forward to address this issue. Novell India, in association with Karunya University, has introduced Novell's professional courses as part of the university's curriculum. Students enrolled in the university's M. Tech course for Information Technology will be offered industry-accepted courses. Apart from this, another company of the Attachmate Group, SUSE, has also pitched in to make the students familiar with the world of open source technology.

Speaking about the initiatives, Dr J Dinesh Peter, associate professor and HoD I/C, Department of Information Technology, said, "We have already started with our first batch of students, who are learning SUSE. I think adding open source technology in the curriculum is a great idea because nowadays, most of the tech companies expect knowledge of open source technology for the jobs that they offer. Open source technology is the future, and I think all universities must have it incorporated in their curriculum in some form or the other."

The university has also gone ahead to provide professional courses from Novell to the students. Dr Peter said, "In India, where the problem of employability of technical graduates is acute, this initiative could provide the much needed shot in the arm. We are pleased to be associated with Novell, which has offered its industry-relevant courses to our students. With growing competition and demand for skilled employees in the technology industry, it is imperative that the industry and academia work in sync to address the lacuna that currently exists in our system."

Growth in the amount of open source software that enterprises use has been much faster than growth in proprietary software usage over the past 2-3 years. One major reason for this is that open source technology has helped companies slash huge IT budgets, while maintaining higher performance standards than they did with proprietary technologies. This trend makes it even more critical to incorporate open source technologies in the college curriculum.

Speaking about the initiative, Venkatesh Swaminathan, country head, The Attachmate Group (Novell, NetIQ, SUSE and Attachmate), said, "This is one of the first implementations of its kind but we do have engagements with universities in various other formats. Regarding this partnership with Karunya, we came out with a kind of a joint strategy to make engineering graduates ready for the jobs enterprises offer today. We thought about the current curriculum and how we could modify it to make it more effective. Our current education system places more emphasis on theory rather than the practical aspects of engineering. With our initiative, we aim to bring in more practical aspects into the curriculum. So we have looked at what enterprises want from engineers when they deploy some solutions. Today, though many enterprises want to use open source technologies effectively, the unavailability of adequate talent to handle those technologies is a major issue. So, the idea was to bridge the gap between what enterprises want and what they are getting, with respect to the talent they require to implement and manage new technologies."

Going forward, the company aims to partner with at least another 15-20 universities this year to integrate its courseware into the curriculum to benefit the maximum number of students in India. "The onus of ensuring that the technical and engineering students who graduate every year in our country are world-class and employable lies on both the academia as well as the industry. With this collaboration, we hope to take a small but important step towards achieving this objective," Swaminathan added.

About The Attachmate Group: Headquartered in Houston, Texas, The Attachmate Group is a privately-held software holding company comprising distinct IT brands. Principal holdings include Attachmate, NetIQ, Novell and SUSE.

[Photo: Venkatesh Swaminathan, country head, The Attachmate Group (Novell, NetIQ, SUSE and Attachmate)]

By: Diksha P Gupta. The author is senior assistant editor at EFY.

Page 22: Open Source to You - August 2014

Buyers' Guide

SSDs Move Ahead to Overtake Hard Disk Drives

High speed, durable and sleek SSDs are moving in to replace 'traditional' HDDs.

A solid state drive (SSD) is a data storage device that uses integrated circuit assemblies as its memory to store data. Now that everyone is switching over to thin tablets and high performance notebooks, carrying heavy, bulky hard disks may be difficult. SSDs, therefore, play a vital role in today's world as they combine high speed, durability and smaller sizes with vast storage and power efficiency.

SSDs consume minimal power because they do not have any movable parts inside, which leads to less consumption of internal power.

HDDs vs SSDs
The new technologies embedded in SSDs make them costlier than HDDs. "SSDs, with their new technology, will gradually overtake hard disk drives (HDDs), which have been around ever since PCs came into prominence. It takes time for a new technology to completely take over the traditional one. Also, new technologies are usually expensive. However, users are ready to pay a little more for a new technology because it offers better performance," explains Rajesh Gupta, country head and director, Sandisk Corporation India.

SSDs use integrated circuit assemblies as memory for storing data. The technology uses an electronic interface which is compatible with traditional block input/output HDDs. So SSDs can easily replace HDDs in commonly used applications.

An SSD uses a flash-based medium for storage. It is believed to have a longer life than an HDD and also consumes less power. "SSDs are the next stage in the evolution of PC storage. They run faster, and are quieter and cooler than the aging technology inside hard drives. With no moving parts, SSDs are also more durable and reliable than hard drives. They not only boost performance but can also be used to breathe new life into older systems," says Vishal Parekh, marketing director, Kingston Technology India.

How to select the right SSD
If you're a videographer, or have a studio dedicated to audio/video post-production work, or are in the banking sector, you can look at ADATA's latest launch, which is featured later in the article. Kingston, too, has introduced SSDs for all possible purposes. SSDs are great options even for gamers, or those who want to ensure their data is saved in a secure medium. Kingston offers an entire range of SSDs, including entry-level variants as well as options for general use.

There are a lot of factors to keep in mind when you are planning to buy an SSD—durability, portability, power consumption and speed. Gupta adds, "The performance of SSDs is typically indicated by their IOPS (input/output operations per second), so one should look at the specifications of the product. Also, check the storage capacity. If you're looking for an SSD when you already have a PC or laptop, then double check the compatibility between your system and the SSD you've shortlisted. If you're buying a new system, then you can always check with the vendors as to what SSD options are available. Research the I/O speeds and get updates about how reliable the product is."

"For PC users, some of the important performance parameters of SSDs are related to battery life, heating of the device and portability. An SSD is 100 per cent solid state technology and has no motor inside, so the advantage is that it consumes less energy; hence, it extends the battery life of the device and is quite portable," explains Gupta.

Listed below are a few broad specifications of SSDs, which can help buyers decide which variant to go in for.

Portability
Portability is one of the major concerns when buying an external hard drive because, as discussed earlier, everyone is gradually shifting to tablets, iPads and notebooks and so would not want to carry around an external hard disk that is heavier than the computing device. The overall portability of an SSD is evaluated on the basis of its size, shape, how much it weighs and its ruggedness.

High speed
Speed is another factor people look for while buying an SSD. If it is not fast, it is not worth the buy. SSDs offer data transfer read speeds that range from approximately 530 MBps to 550 MBps, whereas an HDD offers only around 30 to 50 MBps. SSDs can also boot any operating system almost four times faster than a traditional 7200 RPM 500 GB hard disk drive. With SSDs, applications respond up to 12 times faster than with an HDD. A system equipped with an SSD also launches applications faster and offers high performance overall.

Page 24: Open Source to You - August 2014

Durability
As an SSD does not have any moving parts like a motor and uses a flash-based medium for storing data, it is more likely to keep the data secure and safe. Some SSDs are coated with metal, which extends their life. There is almost no chance of their getting damaged. Even if you drop your laptop or PC, the data stays safe and does not get affected.

Power consumption
In comparison to an HDD, a solid state drive consumes minimal power. "Usually, a PC user faces the challenge of a limited battery life. But since an SSD is 100 per cent solid state technology and has no motor inside, it consumes less energy; hence, it extends the life of the battery and the PC," adds Rajesh Gupta.

There are plenty of other reasons for choosing an SSD over an HDD. These include the warranty, cost, efficiency, etc. "Choosing an SSD can save you the cost of buying a new PC by reviving the system you already own," adds Parekh.

A few options to choose from
Many companies, including Kingston, ADATA and Sandisk, have launched their SSDs and it is quite a task trying to choose the best among them. Kingston has always stood out in terms of delivering good products, not just in the Indian market but worldwide. Ashu Mehrotra, marketing manager, ADATA, speaks about his firm's SSDs: "ADATA has been putting a lot of resources into R&D for SSDs, because of which its products provide unique advantages to customers." Gupta says, "Sandisk is a completely vertically integrated solutions provider and is also a key manufacturer of flash-based storage systems, which are required for SSDs. Because of this, we are very conscious about the categories to be used in the SSD. We also make our own controllers and do our own integration."

HyperX Fury from Kingston Technology
• It is a 6.35 cm (2.5 inch), 7 mm solid state drive (SSD)
• Delivers impressive performance at an affordable price
• Speeds up system boot-up, application loading time and file execution
• Controller: SandForce SF-2281
• Performance: SATA Rev 3.0 (6 GBps)
• Read/write speed: 500 MBps, to boost overall system responsiveness and performance
• Reliability: A cool, rugged and durable drive to push your system to the limits
• Warranty: Three years

Extreme PRO SSD from Sandisk
• Consistently fast data transfer speeds
• Lower latency times
• Reduced power consumption
• Comes in the following capacities: 64 GB, 128 GB and 256 GB
• Speed: 520 MBps
• Compatibility: SATA Revision 3.0 (6 GBps)
• Warranty: Three years

Buyers’ Guide


Page 25: Open Source to You - August 2014

SSD 840 EVO from Samsung
• Capacity: 500 GB (1 GB = 1 billion bytes)
• Dimensions: 100 x 69.85 x 6.80 mm
• Weight: Max 53 g
• Interface: SATA 6 GBps (compatible with SATA 3 GBps and SATA 1.5 GBps)
• Controller: Samsung 3-core MEX controller
• Warranty: Three years

1200 SSD from Seagate
• It is designed for applications demanding fast, consistent performance, and has a dual-port 12 GBps SAS interface
• It comes with 800 GB capacity
• Random read/write performance of up to 110K/40K IOPS
• Sequential read/write performance from 500 MBps to 750 MBps

Premier Pro SP920 from ADATA
• It is designed to meet the high-performance requirements of multimedia file transfers
• It provides up to 7 per cent more space on its SSD due to the right combination of controller and high quality flash
• It weighs 70 grams and its dimensions are 100 x 69.85 x 7 mm
• Controller: Marvell
• It comes in the following capacities: 128 GB, 256 GB, 512 GB and 1 TB
• NAND flash: synchronous MLC
• Interface: SATA 6 GBps
• Read/write speed: from 560 to 180 MBps
• Power consumption: 0.067 W idle / 0.15 W active

By: Manvi Saxena. The author is a part of the editorial team at EFY. With inputs from ADATA, Kingston and Sandisk.

Buyers’ Guide


Page 26: Open Source to You - August 2014

Developers How To

An Introduction to the Linux Kernel

This article provides an introduction to the Linux kernel, and demonstrates how to write and compile a module.

Have you ever wondered how a computer manages the most complex tasks with such efficiency and accuracy? The answer is: with the help of the operating system. It is the operating system that uses hardware resources efficiently to perform various tasks and ultimately makes life easier. At a high level, the OS can be divided into two parts—the first being the kernel and the other the utility programs. Various user space processes ask for system resources such as the CPU, storage, memory, network connectivity, etc, and the kernel services these requests. This column will explore loadable kernel modules in GNU/Linux.

The Linux kernel is monolithic, which means that the entire OS runs solely in supervisor mode. Though the kernel is a single process, it consists of various subsystems and each subsystem is responsible for performing certain tasks. Broadly, any kernel performs the following main tasks.

Process management: This subsystem handles the process 'life-cycle'. It creates and destroys processes, allowing communication and data sharing between processes through inter-process communication (IPC). Additionally, with the help of the process scheduler, it schedules processes and enables resource sharing.

Memory management: This subsystem handles all memory related requests. Available memory is divided into chunks of a fixed size called 'pages', which are allocated to or de-allocated from a process on demand. With the help of the memory management unit (MMU), it maps the process' virtual address space to a physical address space and creates the illusion of a contiguous large address space.

File system: The GNU/Linux system is heavily dependent on the file system. In GNU/Linux, almost everything is a file. This subsystem handles all storage related requirements like the creation and deletion of files, compression and journaling of data, the organisation of data in a hierarchical manner, and so on. The Linux kernel supports all major file systems, including MS Windows' NTFS.

Page 28: Open Source to You - August 2014

Developers How To

Device control: Any computer system requires various devices. But to make the devices usable, there should be a device driver and this layer provides that functionality. There are various types of drivers present, like graphics drivers, a Bluetooth driver, audio/video drivers and so on.

Networking: Networking is one of the important aspects of any OS. It allows communication and data transfer between hosts. It collects, identifies and transmits network packets. Additionally, it also enables routing functionality.

Dynamically loadable kernel modules

We often install kernel updates and security patches to make sure our system is up-to-date. In the case of MS Windows, a reboot is often required, but this is not always acceptable; for instance, the machine cannot be rebooted if it is a production server. Wouldn't it be great if we could add or remove functionality to/from the kernel on-the-fly, without a system reboot? The Linux kernel allows dynamic loading and unloading of kernel modules. Any piece of code that can be added to the kernel at runtime is called a 'kernel module'. Modules can be loaded or unloaded while the system is up and running, without any interruption. A kernel module is object code that can be dynamically linked to the running kernel using the 'insmod' command and unlinked using the 'rmmod' command.

A few useful utilities

GNU/Linux provides various user-space utilities that provide useful information about the kernel modules. Let us explore them.

lsmod: This command lists the currently loaded kernel modules. This is a very simple program which reads the /proc/modules file and displays its contents in a formatted manner.

insmod: This is also a trivial program which inserts a module in the kernel. This command doesn’t handle module dependencies.

rmmod: As the name suggests, this command is used to unload modules from the kernel. Unloading is done only if the current module is not in use. rmmod also supports the -f or --force option, which can unload modules forcibly. But this option is extremely dangerous. There is a safer way to remove modules. With the -w or --wait option, rmmod will isolate the module and wait until the module is no longer used.

modinfo: This command displays information about the module that was passed as a command-line argument. If the argument is not a filename, then it searches the /lib/modules/<version> directory for modules. modinfo shows each attribute of the module in the field:value format.

Note: <version> is the kernel version. We can obtain it by executing the uname -r command.

dmesg: Any user-space program displays its output on the standard output stream, i.e., /dev/stdout, but the kernel uses a different methodology. The kernel appends its output to a ring buffer, and by using the 'dmesg' command, we can examine the contents of the ring buffer.
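To see how these utilities fit together, here is a minimal sample session, assuming the hello.ko module that we build later in this article is present in the current directory (the exact output will vary from system to system):

[root]# insmod ./hello.ko      # link the module into the running kernel
[root]# lsmod | grep hello     # confirm that the module is loaded
[root]# modinfo ./hello.ko     # display its licence, author, description and version
[root]# dmesg | tail -1        # the printk() message appears in the kernel ring buffer
[root]# rmmod hello            # unload the module; its clean-up function runs

Note that insmod and rmmod need root privileges, which is why the root prompt is shown above.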

Preparing the system

Now it's time for action. Let's create a development environment. In this section, let's install all the required packages on an RPM-based GNU/Linux distro like CentOS and a Debian-based GNU/Linux distro like Ubuntu.

Installing on CentOS

First, install the gcc compiler by executing the following command as the root user:

[root]# yum -y install gcc

Then install the kernel development packages:

[root]# yum -y install kernel-devel

Finally, install the ‘make’ utility:

[root]# yum -y install make

Installing on Ubuntu

First, install the gcc compiler:

[mickey] sudo apt-get install gcc

After that, install kernel development packages:

[mickey] sudo apt-get install kernel-package

And, finally, install the ‘make’ utility:

[mickey] sudo apt-get install make

Our first kernel module

Our system is ready now. Let us write the first kernel module. Open your favourite text editor and save the file as hello.c with the following contents:

#include <linux/kernel.h>
#include <linux/module.h>

int init_module(void)
{
    printk(KERN_INFO "Hello, World !!!\n");
    return 0;
}

void cleanup_module(void)
{
    printk(KERN_INFO "Exiting ...\n");
}


Page 29: Open Source to You - August 2014

Developers How To

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Hello world module.");
MODULE_VERSION("1.0");

Any module must have at least two functions. The first is initialisation and the second is the clean-up function. In our case, init_module() is the initialisation function and cleanup_module() is the clean-up function. The initialisation function is called as soon as the module is loaded and the clean-up function is called just before unloading the module. MODULE_LICENSE and other macros are self-explanatory.

There is a printk() function, the syntax of which is similar to the user-space printf() function. But unlike printf() , it doesn’t print messages on a standard output stream; instead, it appends messages into the kernel’s ring buffer. Each printk() statement comes with a priority. In our example, we used the KERN_INFO priority. Please note that there is no comma (,) between ‘KERN_INFO’ and the format string. In the absence of explicit priority, DEFAULT_MESSAGE_LOGLEVEL priority will be used. The last statement in init_module() is return 0 which indicates success.
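For instance, messages can be logged at other severities by using the corresponding priority macros in exactly the same way. The fragment below is only an illustrative sketch (the function name and messages are made up, and it would replace the init function of a module like the one above):

static int __init prio_demo_init(void)
{
	printk(KERN_ERR "A serious error message\n");      /* high priority */
	printk(KERN_WARNING "A warning message\n");        /* medium priority */
	printk(KERN_INFO "An informational message\n");    /* low priority */
	printk("No explicit priority, so the default log level is used\n");
	return 0;
}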

The names of the initialisation and clean-up functions are init_module() and cleanup_module() respectively. But with the new kernel (>= 2.3.13) we can use any name for the initialisation and clean-up functions. These old names are still supported for backward compatibility. The kernel provides module_init and module_exit macros, which register initialisation and clean-up functions. Let us rewrite the same module with names of our own choice for initialisation and cleanup functions:

#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	printk(KERN_INFO "Hello, World !!!\n");
	return 0;
}

static void __exit hello_exit(void)
{
	printk(KERN_INFO "Exiting ...\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Hello world module.");
MODULE_VERSION("1.0");

Here, the __init and __exit markers tell the kernel which functions are the initialisation and clean-up functions, respectively.
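A related marker worth knowing is __initdata, which can be applied to data that is needed only during initialisation; the kernel is free to release init code and data once the module has finished loading. The following is only an illustrative sketch (the variable name is made up), not part of the original example:

#include <linux/init.h>

static int greeting_count __initdata = 3;

static int __init hello_init(void)
{
	int i;

	for (i = 0; i < greeting_count; i++)
		printk(KERN_INFO "Hello, World !!!\n");
	return 0;   /* greeting_count must not be used after initialisation */
}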

Compiling and loading the module
Now, let us understand the module compilation procedure. To compile a kernel module, we are going to use the kernel's build system. Open your favourite text editor and write down the following compilation steps in it, before saving it as Makefile. Please note that the kernel module hello.c and the Makefile must exist in the same directory.

obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

To build modules, kernel headers are required. The above makefile invokes the kernel’s build system from the kernel’s source and finally the kernel’s makefile invokes our Makefile to compile the module. Now that we have everything to build our module, just execute the make command, and this will compile and create the kernel module named hello.ko:

[mickey]$ ls
hello.c  Makefile
[mickey]$ make
make -C /lib/modules/2.6.32-358.el6.x86_64/build M=/home/mickey modules
make[1]: Entering directory `/usr/src/kernels/2.6.32-358.el6.x86_64'
  CC [M]  /home/mickey/hello.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/mickey/hello.mod.o
  LD [M]  /home/mickey/hello.ko.unsigned
  NO SIGN [M] /home/mickey/hello.ko
make[1]: Leaving directory `/usr/src/kernels/2.6.32-358.el6.x86_64'
[mickey]$ ls
hello.c  hello.ko  hello.ko.unsigned  hello.mod.c  hello.mod.o  hello.o  Makefile  modules.order  Module.symvers

We have now successfully compiled our first kernel module. Now, let us look at how to load and unload this module in the kernel. Please note that you must have super-user privileges to load/unload kernel modules. To load a module, switch to the super-user mode and execute the insmod command, as shown below:

[root]# insmod hello.ko

insmod has done its job successfully. But where is the output? It is appended to the kernel’s ring buffer. So let’s verify it by executing the dmesg command:

[root]# dmesg

Hello, World !!!

We can also check whether our module is loaded or not. For this purpose, let’s use the lsmod command:

[root]# lsmod | grep hello

hello 859 0

To unload the module from the kernel, just execute the rmmod command as shown below and check the output of the dmesg command. Now, dmesg shows the message from the clean-up function:

[root]# rmmod hello

[root]# dmesg

Hello, World !!!

Exiting ...

In this module, we have used a couple of macros, which provide information about the module. The modinfo command displays this information in a nicely formatted fashion:

[mickey]$ modinfo hello.ko

filename: hello.ko

version: 1.0

description: Hello world module.

author: Narendra Kangralkar.

license: GPL

srcversion: 144DCA60AA8E0CFCC9899E3

depends:

vermagic: 2.6.32-358.el6.x86_64 SMP mod_unload modversions

Finding the PID of a process
Let us write one more kernel module to find out the process ID (PID) of the current process. The kernel stores all process-related information in the task_struct structure, which is defined in the <linux/sched.h> header file. It provides a current variable, which is a pointer to the current process. To find out the PID of the current process, just print the value of the current->pid variable. Given below is the complete working code (pid.c):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>

static int __init pid_init(void)
{
	printk(KERN_INFO "pid = %d\n", current->pid);
	return 0;
}

static void __exit pid_exit(void)
{
	/* Don't do anything */
}

module_init(pid_init);
module_exit(pid_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Kernel module to find PID.");
MODULE_VERSION("1.0");

The Makefile is almost the same as the first makefile, with a minor change in the object file’s name:

obj-m += pid.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Now compile and insert the module and check the output using the dmesg command:

[mickey]$ make

[root]# insmod pid.ko

[root]# dmesg

pid = 6730

A module that spans multiple files
So far we have explored how to compile a module from a single file. But in a large project, there are several source files for a single module and, sometimes, it is convenient to divide the module into multiple files. Let us understand the procedure of building a module that spans two files. Let's divide the initialisation and clean-up functions from the hello.c file into two separate files, namely startup.c and cleanup.c.

Given below is the source code for startup.c:

#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	printk(KERN_INFO "Function: %s from %s file\n", __func__, __FILE__);
	return 0;
}

module_init(hello_init);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Startup module.");
MODULE_VERSION("1.0");

And “cleanup.c” will look like this.

#include <linux/kernel.h>
#include <linux/module.h>

static void __exit hello_exit(void)
{
	printk(KERN_INFO "Function %s from %s file\n", __func__, __FILE__);
}

module_exit(hello_exit);

MODULE_LICENSE("BSD");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Cleanup module.");
MODULE_VERSION("1.1");

Now, here is the interesting part -- Makefile for these modules:

obj-m += final.o
final-objs := startup.o cleanup.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

The Makefile is self-explanatory. Here, we are saying: “Build the final kernel object by using startup.o and cleanup.o.” Let us compile and test the module:

[mickey]$ ls

cleanup.c Makefile startup.c

[mickey]$ make

Then, let’s display module information using the modinfo command:

[mickey]$ modinfo final.ko

filename: final.ko

version: 1.0

description: Startup module.

author: Narendra Kangralkar.

license: GPL

version: 1.1

description: Cleanup module.

author: Narendra Kangralkar.

license: BSD

srcversion: D808DB9E16AC40D04780E2F

depends:

vermagic: 2.6.32-358.el6.x86_64 SMP mod_unload modversions

Here, the modinfo command shows the version, description, licence and author-related information from each module.

Let us load and unload the final.ko module and verify the output:

[mickey]$ su -

Password:

[root]# insmod final.ko

[root]# dmesg

Function: hello_init from /home/mickey/startup.c file

[root]# rmmod final

[root]# dmesg

Function: hello_init from /home/mickey/startup.c file

Function hello_exit from /home/mickey/cleanup.c file

Passing command-line arguments to the module
In user-space programs, we can easily manage command-line arguments with argc/argv. But to achieve the same functionality through modules, we have to put in more of an effort.


To achieve command-line handling in modules, we first need to declare global variables and use the module_param() macro, which is defined in the <linux/moduleparam.h> header file. There is also the MODULE_PARM_DESC() macro which provides descriptions about arguments. Without going into lengthy theoretical discussions, let us write the code:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static char *name = "Narendra Kangralkar";
static long roll_no = 1234;
static int total_subjects = 5;
static int marks[5] = {80, 75, 83, 95, 87};

module_param(name, charp, 0);
MODULE_PARM_DESC(name, "Name of a student");

module_param(roll_no, long, 0);
MODULE_PARM_DESC(roll_no, "Roll number of a student");

module_param(total_subjects, int, 0);
MODULE_PARM_DESC(total_subjects, "Total number of subjects");

module_param_array(marks, int, &total_subjects, 0);
MODULE_PARM_DESC(marks, "Subjectwise marks of a student");

static int __init param_init(void)
{
	static int i;

	printk(KERN_INFO "Name    : %s\n", name);
	printk(KERN_INFO "Roll no : %ld\n", roll_no);
	printk(KERN_INFO "Subjectwise marks ");
	for (i = 0; i < total_subjects; ++i) {
		printk(KERN_INFO "Subject-%d = %d\n", i + 1, marks[i]);
	}

	return 0;
}

static void __exit param_exit(void)
{
	/* Don't do anything */
}

module_init(param_init);
module_exit(param_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Module with command line arguments.");
MODULE_VERSION("1.0");

After compilation, first insert the module without any arguments, which displays the default values of the variables. But after providing command-line arguments, the default values will be overridden. The output below illustrates this:

[root]# insmod parameters.ko

[root]# dmesg

Name : Narendra Kangralkar

Roll no : 1234

Subjectwise marks

Subject-1 = 80

Subject-2 = 75

Subject-3 = 83

Subject-4 = 95

Subject-5 = 87

[root]# rmmod parameters

Now, let us reload the module with command-line arguments and verify the output:

[root]# insmod ./parameters.ko name="Mickey" roll_no=1001 marks=10,20,30,40,50

[root]# dmesg

Name : Mickey

Roll no : 1001

Subjectwise marks

Subject-1 = 10

Subject-2 = 20

Subject-3 = 30

Subject-4 = 40

Subject-5 = 50

If you want to learn more about modules, the Linux kernel's source code is the best place to look. You can download the latest source code from https://www.kernel.org/. Additionally, there are a few good books available in the market, like 'Linux Kernel Development' (3rd Edition) by Robert Love and 'Linux Device Drivers' (3rd Edition). You can also download the latter for free from http://lwn.net/Kernel/LDD3/.

By: Narendra Kangralkar

The author is a FOSS enthusiast and loves exploring anything related to open source. He can be reached at [email protected]


Write Better jQuery Code for Your Project

jQuery, the cross-platform JavaScript library designed to simplify the client-side scripting of HTML, is used by over 80 per cent of the 10,000 most popularly visited websites. jQuery is free open source software which has a wide range of uses. In this article, the author suggests some best practices for writing jQuery code.

This article aims to explain how to use jQuery in a rapid and more sophisticated manner. Websites focus not only on backend functions like user registration, adding new friends or validation, but also on how their Web pages will get displayed to the user, how their pages will behave in different situations, etc. For example, doing a mouse-over on the front page of a site will either show beautiful animations, properly formatted error messages or interactive hints to the user on what can be done on the site.

jQuery is a very handy, interactive, powerful and rich client-side framework built on JavaScript. It is able to handle powerful operations like HTML manipulation, event handling and beautiful animations. Its most attractive feature is that it works across browsers. When using plain JavaScript, one of the things we need to ensure is whether the code we write tends towards perfection. It should handle any exception. If the user enters an invalid type of value, the script should not just hang or behave badly. However, in my career, I have seen many junior developers using plain JavaScript solutions instead of rich frameworks like jQuery, and writing numerous lines of code to do some fairly minor task.

For example, if one wants to write code to show the datepicker selection on an onclick event in plain JavaScript, the flow is:
1. For the onclick event, create one div element.
2. Inside that div, add content for dates, month and year.
3. Add navigation for changing the months and year.
4. Make sure that, on the first click, the div can be seen, and on the second click, the div is hidden; and this should not affect any other HTML elements.

Just creating a datepicker is a slightly more difficult task, and if this needs to be implemented many times in the same page, it becomes more complex. If the code is not properly implemented, then making modifications can be a nightmare. This is where jQuery comes to our rescue. By using it, we can show the datepicker as follows:

$("#id").datepicker();

That's it! We can reuse the same code multiple times by just changing the id(s); and without any kind of collision, we can show multiple datepickers in the same page. That is the beauty of jQuery. In short, by using it, we can focus more on the functionality of the system and not just on small parts of the system. And we can write more complex code like a rich text editor and lots of other operations. But if we write jQuery code without proper guidance and proper methodology, we end up writing bad code; and sometimes that can become a nightmare for other team members to understand and modify for minor changes.

Developers often make silly mistakes during jQuery code implementation. So, based on some silly mistakes that I have encountered, here are some general guidelines that every developer should keep in mind while implementing jQuery code.

General guidelines for jQuery
1. Try to use 'this' instead of just using the id and class of the DOM elements. I have seen that most developers are happy with just using $('#id') or $('.class') everywhere:

//What developers are doing:

$(‘#id’).click(function(){

var oldValue = $(‘#id’).val();

var newValue = (oldValue * 10) / 2;

$(‘#id’).val(newValue);

});

//What should be done: Try to use more $(this) in your code.

$(‘#id’).click(function(){

$(this).val(($(this).val() * 10) / 2);

});

2. Avoid conflicts: When working with a CMS like WordPress or Magento, which might be using other JavaScript frameworks alongside jQuery, you still need to work with jQuery inside that CMS or project. In that case, use jQuery's noConflict():

var $abc = jQuery.noConflict();

$abc(‘#id’).click(function(){

//do something

});

3. Take care of absent elements: Make sure that the element your jQuery code works on or manipulates is actually present. If the element is added dynamically, first check whether it has been added to the DOM:

$(‘#divId’).find(‘#someId’).length

This code returns 0 if there isn’t an element with ‘someId’ found; else it will return the total number of elements that are inside ‘divId’.
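As a small usage sketch (the element ids here are just placeholders, not from the original example), the length check can guard the manipulation so that nothing happens when the element is missing:

if ($('#divId').find('#someId').length) {
    // element exists, so it is safe to manipulate it
    $('#divId').find('#someId').text('Element found');
}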

4. Use proper selectors and try to use 'find()' more, because find() can traverse the DOM faster. For example, if we want to find the content of the div with class 'divClass' inside '#id1':

//demo code snippet

<div id='id1'>
	<span id='id2'></span>
	<div class='divClass'>Here is the content.</div>
</div>

//developer generally uses

var content = $(‘#id1 .divClass’).html();

//the better way is [This is faster in execution]

var content = $(‘#id1’).find(‘div.divClass’).html();

5. Write functions wherever required: Generally, developers write the same code multiple times. To avoid this, we can write functions. To write functions, let’s find the block that will repeat. For example, if there is a validation of an entry for a text box and the same gets repeated for many similar text boxes, then we can write a function for the same. Given below is a simple example of a text box entry. If the value is left empty in the entry, then function returns 0; else, if the user has entered some value, then it should return the same value.

//Javascript

function doValidation(elementId){

//get value using elementId

//check and return value

}

//simple jQuery

$(“input[type=’text’]”).blur(function(){

//get value using $(this)

//check and return value

});

//best way to implement

//now you can use this function easily with click event also

$.doValidation = function(){

//get value

//check and return value

};

$(“input[type=’text’]”).blur($.doValidation);

6. Object organisation: This is another thing that each developer needs to keep in mind. If one bunch of variables is related to one task and another bunch of variables is related to another task, then get them better organised, as shown below:


//bad way

var disableTask1 = false;

var defaultTask1 = 5;

var pointerTask1 = 2;

var disableTask2 = true;

var defaultTask2 = 10;

var currentValueTask2 = 10;

//like that many other variables

//better way

var task1 = {

disable: false,

default: 5,

pointer: 2,

getNewValue: function(){

//do some thing

return task1.default + 5;

}

};

var task2 = {

disable: true,

default: 10,

currentValue: 10

};

//how to use them

if(task1.disable){

//do some thing…

return task1.default;

}

7. Use of callbacks: When multiple functions are used in your code and the second function depends on the output of the first, callbacks are required. For example, task2 needs to be executed after completion of task1, or in other words, you need to halt execution of task2 until task1 is executed. I have noticed that many developers are not aware of callback functions. So, they either initialise one variable for checking [like a mutex in an operating system] or set a timeout for execution. Below, I have explained how easily this can be implemented using a callback.

//Javascript way
task1(function(){
	task2();
});

function task1(callback){

//do something

if (callback && typeof (callback) === “function”) {

callback();

}

}

function task2(callback){

//do something

if (callback && typeof (callback) === “function”) {

callback();

}

}

//Better jQuery way

$.task1 = function(){

//do something

};

$.task2 = function(){

//do something

};

var callbacks = $.Callbacks();

callbacks.add($.task1);

callbacks.add($.task2);

callbacks.fire();

8. Use of ‘each’ for iteration: The snippet below shows how each can be used for iteration.

var array;

//javascript way

var length = array.length;

for(var i =0; i<length; i++){

var key = array[i].key;

// like wise fetching other values.

}

//jQuery way

$.each(array, function(key, value){

alert(key);

});

9. Don't repeat code: Never write any code again and again. If you find yourself doing so, halt your coding and read the eight points listed above, all over again.

Next time, I'll explain how to write more effective plugins, using some examples.

By: Savan Koradia

The author works as a senior PHP Web developer at Multidots Solutions Pvt Ltd. He writes tutorials to help other developers to write better code. You can contact him at: [email protected]; Skype: savan.koradia.multidots


Back Up a Sharded Server in MongoDB

Continuing the series on MongoDB, in this article, readers learn how to set up a backup for the sharded environment that was set up over the previous two articles.

In the previous article in this series, we set up a sharded environment in MongoDB. This article deals with one of the most intriguing and crucial topics in database administration—backups. The article will demonstrate the MongoDB backup process and will make a backup of the sharded server that was configured earlier. So, to proceed, you must set up your sharded environment as per our previous article, as we'll be using the same configuration.

Before we move on with the backup, make sure that the balancer is not running. The balancer is the process that ensures that data is distributed evenly in a sharded cluster. This is an automated process in MongoDB and, at most times, you won't be bothered with it. In this case, though, it needs to be stopped so that no chunk migration takes place while we back up the server. If you're wondering what the term 'chunk migration' means, let me tell you that if one shard in a sharded MongoDB environment has more data stored than its peers, then the balancer process migrates some data to other shards. Evenly distributed data ensures optimal performance in a sharded environment.

So now connect to a Mongo process by opening a command prompt, going to the MongoDB root directory and typing 'mongo'. Type sh.getBalancerState() to find out the balancer's status. If you get true as the output, your balancer is running. Type sh.stopBalancer() to stop the balancer.
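For reference, the whole check-and-stop sequence in the mongo shell looks roughly like this (a minimal sketch; the exact prompt and output may differ on your setup):

mongos> sh.getBalancerState()
true
mongos> sh.stopBalancer()
mongos> sh.getBalancerState()
false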

The next step is to back up the config server, which stores metadata about shards. In the previous article, we set up three config servers for our shard. Since all the config servers store the same metadata, and since we have three of them just to ensure availability, we'll be backing up just one config server for demonstration purposes. So open a command prompt and type the following command to back up the config database of our config server:

C:\Users\viny\Desktop\mongodb-win32-i386-2.6.0\bin>mongodump --host localhost:59020 --db config

This command will dump your config database under the dump directory of your MongoDB root directory.

Now let's back up our actual data by taking backups of all of our shards. Issue the following commands, one by one, and take a backup of all the three replica sets of both the shards that we configured earlier:

mongodump --host localhost:38020 --out .\shard1\replica1
mongodump --host localhost:38021 --out .\shard1\replica2
mongodump --host localhost:38022 --out .\shard1\replica3
mongodump --host localhost:48020 --out .\shard2\replica1
mongodump --host localhost:48021 --out .\shard2\replica2
mongodump --host localhost:48022 --out .\shard2\replica3

The --out parameter defines the directory where MongoDB will place the dumps. Now you can start the balancer by issuing the sh.startBalancer() command and resume normal operations. So we're done with our backup operation.

If you want to explore a bit more about backups and restores in MongoDB, you can check the MongoDB documentation and the article at http://www.thegeekstuff.com/2013/09/mongodump-mongorestore/, which will give you some good insights into the mongodump and mongorestore commands.
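The article itself stops at taking dumps; restoring one of them follows the same pattern with mongorestore. A minimal sketch, where the host and path simply mirror the dump commands above, would be:

mongorestore --host localhost:38020 .\shard1\replica1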

Figure 1: Balancer status

By: Vinayak Pandey

The author is an experienced database developer, with exposure to various database and data warehousing tools and techniques, including Oracle, Teradata, Informatica PowerCenter and MongoDB.


CODESPORT

This month's column continues the discussion of natural language processing.

For the past few months, we have been discussing information retrieval and natural language processing (NLP), as well as the algorithms associated with them. In this month's column, let's continue our discussion on NLP while also covering an important NLP application called 'Named Entity Recognition' (NER). As mentioned earlier, given a large number of text documents, NLP techniques are employed to extract information from the documents. One of the most common sources of textual information is newspaper articles. Let us consider a simple example wherein we are given all the newspaper articles that appeared in the last one year. The task that is assigned to us is related to the world of business. We are asked to find out all the mergers and acquisitions of businesses. We need to extract information on which companies bought over other firms as well as the companies that merged with each other. Our first rudimentary steps towards getting this information will perhaps be to look for keyword-based searches that used terms such as 'merger' or 'buys'. Once we find the sentences containing those keywords, we could then perhaps look for the names of the companies, if any occur in those sentences. Such a task requires us to identify all company names present in the document.

For a person reading the newspaper article, such a task seems simple and straightforward. Let us first try to list down the ways in which a human being would try to identify the company names that could be present in a text document. We need to use heuristics such as: (a) company names typically begin with capital letters; (b) they can contain words such as 'Corporation' or 'Ltd'; (c) they can be represented by letters of the alphabet separated by full stops, such as I.B.M. We could also use contextual clues such as 'X's stock price went up' to infer that X is a business or company. Now, the question we are left with is whether it is possible to convert what constitutes our intuitive knowledge about how to look for a company's name in a text document into rules that can be automatically checked by a program. This is the task that is faced by NLP applications which try to do Named Entity Recognition (NER). The point to note is that while the simple heuristics we use to identify names of companies do work well in many cases, it is also quite possible that they miss out on extracting names of companies in certain other cases. For instance, consider the possibility of the company's name being represented as IBM instead of I.B.M., or as International Business Machines. The rule-based system could potentially miss out on recognising it. Similarly, consider a sentence like, "Indian Oil and Natural Gas Company decided that…" In this case, it is difficult to figure out whether there are two independent entities, namely, 'Indian Oil' and 'Natural Gas Company', being referred to in the sentence, or if it is a single entity whose name is 'Indian Oil and Natural Gas Company'. It requires considerable knowledge about the business world to resolve the ambiguity. We could perhaps consult the World Wide Web or Wikipedia to clear our doubts. The use of such sources of knowledge is quite common in Named Entity Recognition (NER) systems. Now let us look a bit deeper into NER systems and their uses.
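To make the discussion concrete, here is a toy rule-based sketch in Python of the kind of heuristics just described (this is not from the column itself; the patterns and the example sentence are invented purely for illustration, and a real NER system would be far more sophisticated):

import re

# Heuristic patterns: runs of capitalised words ending in a company suffix,
# or dotted abbreviations such as I.B.M.
SUFFIX_PATTERN = re.compile(r'\b(?:[A-Z][a-z]+\s)+(?:Corporation|Ltd|Inc)\b')
ABBREV_PATTERN = re.compile(r'\b(?:[A-Z]\.){2,}')

def find_company_candidates(text):
    candidates = SUFFIX_PATTERN.findall(text)
    candidates += ABBREV_PATTERN.findall(text)
    return candidates

print(find_company_candidates("Acme Widgets Ltd was bought by I.B.M. last year."))
# Prints candidate strings such as ['Acme Widgets Ltd', 'I.B.M.']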

Types of entities
What are the types of entities that are of interest to a NER system? Named entities are, by definition, proper nouns, i.e., nouns that refer to a particular person, place, organisation, thing, date or time, such as Sandya, Star Wars, Pride and Prejudice, Cubbon Park, March, Friday, Wipro Ltd, Boy Scouts, and the Statue of Liberty. Note that a named entity can span more than one word, as in the case of 'Cubbon Park'. Each of these entities is assigned a different tag, such as Person, Company, Location, Month, Day, Book, etc. If the above example is tagged with entities, it will be tagged as <Person> Sandya </Person>, <Movie>Star Wars</Movie>, <Book> Pride and Prejudice </Book>, <Location> Cubbon Park </Location>, etc.

It is not only important that the NER system recognises a phrase correctly as an entity but also that it labels it with the right entity type. Consider the sentence, "Washington Jr went to school in England, but for graduate studies, he moved to the United States and studied at Washington." This sentence contains two references to the noun 'Washington', one as a person: 'Washington Jr', and another as a location: 'Washington, United States'. While it may appear that if an NER system has a list of all proper nouns, it can correctly extract all entities, in reality, this is not true. Consider the two sentences, "Jobs are hard to find…" and "Jobs said that the employment rate is picking up…" Even if the NER system has an exhaustive list of proper nouns, it needs to figure out that the word 'Jobs' appearing in the first sentence does not refer to an entity, whereas the reference 'Jobs' in the second sentence is an entity.

Given our discussion so far, it is clear to us that NER systems can be built in a number of ways, though no single method can be considered to be superior to others and a combination of techniques is needed. We saw that rule-based NER systems tend to be incomplete and have the disadvantage of requiring manual extension quite frequently. Rule-based systems use typical pattern matching techniques to identify the entities. On the other hand, it is possible to extract features associated with named entities and use them to train classifiers that can tag entities, using machine learning techniques. Machine learning approaches for identifying entities can be based on: (a) supervised learning techniques; (b) semi-supervised learning techniques; and (c) unsupervised learning techniques.

The third kind of NER systems can be based on gazetteers, wherein a lexicon or gazette for names is constructed and made available to the NER system which then tags the text, identifying entities in the text based on the lexicon entries. Once a gazetteer is available, all that the NER needs to do is to have an efficient lookup in the gazetteer for each phrase it identifies in the text, and tag it based on the information it finds in the gazette. A gazette can also help to embed external world information, which can help in name entity resolution. But first, the gazette needs to be built for it to be available to the NER system. Building a gazette can consume considerable manual effort. One of the alternatives is to build the lexicon or gazetteer itself through automatic means, which brings us back to the problem of recognising named entities automatically from various document sources. Typically, external world sources such as Wikipedia or Twitter can be used as the information sources from which the gazette can be built. Sometimes a combination of approaches can be used with a lexicon, in conjunction with a rules-based or machine learning approach.

While rule-based NER systems and gazetteer approaches work well for a domain-specific NER, machine learning approaches generally perform well when applied across multiple domains. Many of the machine learning based approaches use supervised learning techniques, by which a large corpus of text is annotated manually with named entities and the goal is to use the annotated data to train the learner. These systems use statistical models and some form of feature identification to make predictions about named entities in unlabelled text, based on what they have learnt from the annotated text. Typically, supervised learning systems study the features of positive and negative examples, which have been tagged as named entities in the hand-annotated training set. They use that information to either come up with statistical models, which can predict whether a newly encountered phrase is a named entity or not. If it is a named entity, supervised learning systems predict its type as well. In the next column, we will continue our discussion on how hidden Markov models and maximum entropy models can be used to construct learner systems.

My 'must-read book' for this month
This month's book suggestion comes from one of our readers, Jayshankar, and his recommendation is very appropriate for this month's column. He recommends an excellent resource for text mining—a book called 'Taming Text' by Ingersol, Morton and Farris. The book describes different algorithms for text search, text clustering and classification. There is also a detailed chapter on Named Entity Recognition, which will be useful supplementary reading for this month's column. Thank you, Jay, for sharing this book link.

If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book’s name, and a short write-up on why you think it is useful, so I can mention it in the column. This would help many readers who want to improve their software skills.

If you have any favourite programming questions or software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming!

By: Sandya Mannarswamy
The author is an expert in systems software and is currently working with Hewlett Packard India Ltd. Her interests include compilers, multi-core and storage systems. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group 'Computer Science Interview Training India' at http://www.linkedin.com/groups?home=&gid=2339182


Big Data on a Desktop: A Virtual Machine in an OpenStack Cloud

OpenStack is a worldwide collaboration between developers and cloud computing technologists aimed at developing the cloud computing platform for public and private clouds. Let's install it on our desktop.

Installing OpenStack using Packstack is very simple. After a test installation in a virtual machine, you will find that the basic operations for creating and using virtual machines are now quite simple when using a Web interface.

The environment
It is important to understand the virtual environment. While everything is running on a desktop, the setup consists of multiple logical networks interconnected via virtual routers and switches. You need to make sure that the routes are defined properly because otherwise, you will not be able to access the virtual machines you create.

On the desktop, the virt-manager creates a NAT-based network by default. NAT assures that if your desktop can access the Internet, so can the virtual machine. The Internet access had been used when the OpenStack distribution was installed in the virtual machine.

The Packstack installation process creates a virtual public network for use by the various networks created within the cloud environment. The virtual machine on which OpenStack is installed is the gateway to the physical network.

Virtual network on the desktop (virbr0 interface): 192.168.122.0/24
IP address of the eth0 interface on the OpenStack VM: 192.168.122.54
Public virtual network created by Packstack on the OpenStack VM: 172.24.4.224/28
IP address of the br-ex interface on the OpenStack VM: 172.24.4.225

Testing the environment
In the OpenStack VM console, verify the network addresses. In my case, I had to explicitly give an IP address to the br-ex interface, as follows:

# ifconfig

# ip addr add 172.24.4.225/28 dev br-ex

On the desktop, add a route to the public virtual network on OpenStack VM:

# route add -net 172.24.4.224 netmask 255.255.255.240 gw 192.168.122.54

Now, browse to http://192.168.122.54/dashboard and create a new project and a user associated with the project.
1. Sign in as the admin.
2. Under the Identity panel, create a user (youser) and a project (Bigdata). Sign out and sign in as youser to create and test a cloud VM.
3. Create a private network for the project under Project/Network/Networks:
	• Create the private network 192.168.10.0/24 with the gateway 192.168.10.254.
	• Create a router and set a gateway to the public network. Add an interface to the private network with the IP address 192.168.10.254.
4. To be able to sign in using ssh, under Project/Compute/Access & Security, in the Security Groups tab, add the following rules to the default security group:
	• Allow ssh access: a Custom TCP Rule allowing traffic on Port 22.
	• Allow icmp access: a Custom ICMP Rule with Type and Code value -1.
5. For password-less signing into the VM, under Project/Compute/Access & Security, in the Key Pairs tab, do the following:
	• Select the Import Key Pair option and give it a name, e.g., 'desktop user login'.
	• In your desktop terminal window, use ssh-keygen to create a public/private key pair in case you don't already have one.
	• Copy the contents of ~/.ssh/id_rsa.pub from your desktop account and paste them in the public key.
6. Allocate a public IP for accessing the VM under Project/Compute/Access & Security, in the Floating IPs tab, and allocate the IP to the project. You may get a value like 172.24.4.229.
7. Now launch the instance under Project/Compute/Instance:



• Give it a name - test and choose the m1-tiny flavour.

• Select the boot source as ‘Boot from image' with the image name ‘cirros', a very small image included in the installation.

• Once it is launched, associate the floating ip obtained above with this instance.

Now, you are ready to log in to the VM created in your local cloud. In a terminal window, type:

ssh [email protected]

You should be signed into the virtual machine without needing a password.

You can experiment with importing the Fedora VM image you used for the OpenStack VM and launching it in the cloud. Whether you succeed or not will depend on the resources available in the OpenStack VM.

Installing only the needed OpenStack services
You will have observed that OpenStack comes with a very wide range of services, some of which are not likely to be very useful for your experiments on the desktop, e.g., the additional networks and router created in the tests above. Here is a part of the dialogue for installing only the required services on the desktop:

[root@amd ~]# packstack
Welcome to Installer setup utility
Enter the path to your ssh Public key to install on servers:
Packstack changed given value to required value /root/.ssh/id_rsa.pub
Should Packstack install MySQL DB [y|n] [y] : y
Should Packstack install OpenStack Image Service (Glance) [y|n] [y] : y
Should Packstack install OpenStack Block Storage (Cinder) service [y|n] [y] : n
Should Packstack install OpenStack Compute (Nova) service [y|n] [y] : y
Should Packstack install OpenStack Networking (Neutron) service [y|n] [y] : n
Should Packstack install OpenStack Dashboard (Horizon) [y|n] [y] : y
Should Packstack install OpenStack Object Storage (Swift) [y|n] [y] : n
Should Packstack install OpenStack Metering (Ceilometer) [y|n] [y] : n
Should Packstack install OpenStack Orchestration (Heat) [y|n] [n] : n
Should Packstack install OpenStack client tools [y|n] [y] : y

The answers to the other questions will depend on the network interface and the IP address of your desktop, but there is no ambiguity here. You should answer with the interface ‘lo' for CONFIG_NOVA_COMPUTE_PRIVIF and CONFIG_NOVA_NETWORK_PRIVIF. You don't need an extra physical interface as the compute services are running on the same server.

Now, you are ready to test your OpenStack installation on the desktop. You may want to create a project and add a user to the project. Under Project/Compute/Access & Security, you will need to add firewall rules and key pairs, as above.

However, you will not need to create any additional private network or a router.

Import a basic cloud image, e.g., from http://fedoraproject.org/get-fedora#clouds under Project/Compute/Images.

You may want to create an additional flavour for a virtual machine. The m1.tiny flavour has 512MB of RAM and 4GB of disk and is too small for running Hadoop. The m1.small flavour has 2GB of RAM and 20GB of disk, which will restrict the number of virtual machines you can run for testing Hadoop. Hence, you may create a mini flavour with 1GB of RAM and 10GB of disk. This will need to be done as the admin user.
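The flavour can be created from the dashboard as the admin user (under the Admin panel's Flavors page). If the nova command-line client happens to be installed, something along the following lines should also work; this is only a sketch, where the flavour name 'mini' and the ID 10 are arbitrary choices and the positional arguments are name, ID, RAM in MB, disk in GB and number of vCPUs:

nova flavor-create mini 10 1024 10 1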

Now, you can create an instance of the basic cloud image. The default user is fedora and your setup is ready for exploration of Hadoop data.

Figure 1: Simplified network diagram (the desktop's em1 and virbr0 interfaces, the OpenStack VM's eth0 and br-ex interfaces, a router and the Internet)

By: Dr Anil Seth
The author has earned the right to do what interests him. You can find him online at http://sethanil.com, http://sethanil.blogspot.com, and reach him via email at [email protected]


MariaDB: The MySQL Fork that Google has Adopted

MariaDB is a community developed fork of MySQL, which has overtaken MySQL. That many leading corporations in the cyber environment, including Google, have migrated to MariaDB speaks for its importance as a player in the database firmament.

MariaDB is a high performance, open source database that helps the world's busiest websites deliver more content, faster. It has been created by the developers of MySQL with the help of the FOSS community and is a fork of MySQL. It offers various features and enhancements like alternate storage engines, server optimisations and patches.

The lead developer of MariaDB is Michael 'Monty' Widenius, who is also the founder of MySQL and Monty Program AB.

No single person or company nurtures MariaDB/MySQL development. The guardian of the MariaDB community, the MariaDB Foundation, drives it. It states that it has the trademark of the MariaDB server and owns mariadb.org, which ensures that the official MariaDB development tree is always open to the developer community. The MariaDB Foundation assures the community that all the patches, as well as MySQL source code, are merged into MariaDB. The Foundation also provides a lot of documentation. MariaDB is a registered trademark of SkySQL Corporation and is used by the MariaDB Foundation with permission. It is a good choice for database professionals looking for the best and most robust SQL server.

History
In 2008, Sun Microsystems bought MySQL for US$ 1 billion. But the original developer, Monty Widenius, was quite disappointed with the way things were run at Sun, so he founded a new company and his own fork of MySQL - MariaDB. It is named after Monty's younger daughter, Maria. Later, when Oracle announced the acquisition of Sun, most of the MySQL developers jumped to its forks: MariaDB and Drizzle.

MariaDB version numbers follow MySQL numbers till 5.5. Thus, all the features in MySQL are available in MariaDB. After MariaDB 5.5, its developers started a new branch numbered MariaDB 10.0, which is the development version of MariaDB. This was done to make it clear that MariaDB 10.0 will not import all the features from MySQL 5.6. Also, at times, some of these features do not seem to be solid enough for MariaDB's standards. Since new specific features have been developed in MariaDB, the team decided to go for a major version number. The currently used version, MariaDB 10.0, is built on the MariaDB 5.5 series and has back-ported features from MySQL 5.6 along with entirely new developments.


Why MariaDB is better than MySQL
When comparing MariaDB and MySQL, we are comparing different development cultures, features and performance. The patches developed by MariaDB focus on bug fixing and performance. By supporting the features of MySQL, MariaDB implements more improvements and delivers better performance without restrictions on compatibility with MySQL. It also provides more storage engines than MySQL. What makes MariaDB different from MySQL is better testing, fewer bugs and fewer warnings. The goal of MariaDB is to be a drop-in replacement for MySQL, with better developments.

Navicat is a strong and powerful MariaDB administration and development tool. It is graphic database management and development software produced by PremiumSoft CyberTech Ltd. It provides a native environment for MariaDB database management and supports the extra features like new storage engines, microsecond and virtual columns.

It is easy to convert from MySQL to MariaDB, as we need not convert any data and all our old connectors to other languages work unchanged. As of now MariaDB is capable of handling data in terabytes, but more needs to be done for it to handle data in petabytes.

Features
Here is a list of features that MariaDB provides:
• Since it has been released under the GPL version 2, it is free.
• It is completely open source.
• Open contributions and suggestions are encouraged.
• MariaDB is one of the fastest databases available.
• Its syntax is pretty simple, flexible and easy to manage.
• Data can easily be imported from or exported to CSV and XML.
• It is useful for both small and large databases, containing billions of records and terabytes of data in hundreds of thousands of tables.
• MariaDB includes pre-installed storage engines like Aria, XtraDB, PBXT, FederatedX and SphinxSE.
• The use of the Aria storage engine makes complex queries faster. Aria is usually faster since it caches row data in memory and normally doesn't have to write temporary rows to disk.
• Some storage engines and plugins are pre-installed in MariaDB.
• It has a very strong community.

Installing MariaDB
Now let's look at how MariaDB is installed.

Step 1: First, make sure that the required packages are installed along with the apt-get key for the MariaDB repository, by using the following commands:

$ sudo apt-get install software-properties-common

$ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db

Now, add the apt-get repository as per your Ubuntu version.

For Ubuntu 13.10:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu saucy main'

For Ubuntu 13.04:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu raring main'

For Ubuntu 12.10:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu quantal main'

For Ubuntu 12.04 LTS:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu precise main'

Step 2: Install MariaDB using the following commands:

$ sudo apt-get update

$ sudo apt-get install mariadb-server

Provide the root account password as shown in Figure 1.

Step 3: Log in to MariaDB using the following command, after installation:

mysql -u root -p

Figure 1: Configuring MariaDB

Figure 2: Logging into MariaDB


Creating a database in MariaDB
When entering the account administrator password set up during installation, you will be given a MariaDB prompt.

Create a database named students by using the following command:

CREATE DATABASE students;

Switch to the new database using the following command (this is to make sure that you are currently working on this database):

USE students;

Now that the database has been created, create a table:

CREATE TABLE details(
	student_id int(5) NOT NULL AUTO_INCREMENT,
	name varchar(20) DEFAULT NULL,
	age int(3) DEFAULT NULL,
	marks int(5) DEFAULT NULL,
	PRIMARY KEY(student_id)
);

To see what we have done, use the following command:

show columns in details;

Each column in the table creation command is separated by a comma and is in the following format:

Column_Name Data_Type[(size_of_data)] [NULL or NOT NULL] [DEFAULT default_value] [AUTO_INCREMENT]

These columns can be defined as:
• Column Name: Describes the attribute being assigned.
• Data Type: Specifies the type of data in the column.
• Null: Defines whether null is a valid value for that field; it can be 'null' or 'not null'.
• Default Value: Sets the initial value of all newly created records that do not specify a value.
• auto_increment: MySQL will handle the sequential numbering of any column marked with this option, internally, in order to provide a unique value for each record.

Ultimately, before closing the table definition, we need to specify the primary key by typing PRIMARY KEY(column_name). This guarantees that the column will serve as a unique field.

Inserting data into a MariaDB table
To insert data into a MariaDB table, use the following commands:

INSERT INTO details(name,age,marks) VALUES("anu",15,450);
INSERT INTO details(name,age,marks) VALUES("Bob",15,400);

The output will be as shown in Figure 4.

We need not add values in student_id. It is automatically incremented. All other values are given in quotes.
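To verify what has been inserted, a simple SELECT (not shown in the original walkthrough, but standard SQL) can be run at the same prompt:

SELECT * FROM details;

This should list the two rows, with student_id filled in automatically as 1 and 2.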

Deleting a table
To delete a table, type the following command:

DROP TABLE table_name;

Once the table is deleted, the data inside it cannot be recovered.

We can view the current table using the show tables command, which gives all the tables inside the database:

SHOW tables;

After deleting the table, use the following commands:

DROP TABLE details;

Query OK, 0 rows affected (0.02 sec)

SHOW tables;

The output will be:

Empty set (0.00 sec)

Figure 5: Tables in the database

Google waves goodbye to MySQL
Google has now switched to MariaDB and dumped MySQL. "For the Web community, Google's big move might be a paradigm shift in the DBMS ecosystem," said a Google engineer. Major Linux distributions, like Red Hat and SUSE, and well-known websites such as Wikipedia, have also switched from MySQL to MariaDB. This is a great blow to MySQL.

Figure 3: A sample table created
Figure 4: Inserting data into a table

Google has migrated applications that were previously running on MySQL on to MariaDB without changing the application code. There are five Google technicians working part-time on MariaDB patches and bug fixes, and Google continues to maintain its internal branch of MySQL to have complete control over the improvement. Google running thousands of MariaDB servers can only be good news for those who feel more comfortable with a non-Oracle future for MySQL.

Though multinational corporations like Google have switched to MariaDB, it does have a few shortcomings. MariaDB's performance is slightly better on multi-core machines, but one suspects that MySQL could be tweaked to match that performance. All it requires is for Oracle to improve MySQL by adding some new features that are not present in MariaDB yet, and then it will be difficult to switch back to the previous database.

MariaDB has the advantage of being bigger, in terms of the number of users, than its forks and clones. MySQL took a lot of time and effort before emerging as the choice of many companies. So, it is a little hard to introduce MariaDB in the commercial field. Being a new open source standard, we can only hope that MariaDB will overtake other databases in a short span of time.

Please share your feedback/thoughts/views via email at [email protected]

By: Amrutha S.
The author is currently studying for a bachelor's degree in Computer Science and Engineering at Amrita University in Kerala, India. She is an open source enthusiast and also an active member of the Amrita FOSS club. She can be contacted at [email protected].



Haskell: The Purely Functional Programming Language

Haskell, an open source programming language, is the outcome of 20 years of research. Named after the logician, Haskell Curry, it has all the advantages of functional programming and an intuitive syntax based on mathematical notation. This second article in the series on Haskell explores a few functions.

Consider the function sumInt to compute the sum of two integers. It is defined as:

sumInt :: Int -> Int -> Int
sumInt x y = x + y

The first line is the type signature, in which the function name, arguments and return types are separated using a double colon (::). The arguments and the return types are separated by the symbol (->). Thus, the above type signature tells us that the sum function takes two arguments of type Int and returns an Int. Note that function names must always begin with a lower-case letter. The names are usually written in CamelCase style.

You can create a Sum.hs Haskell source file using your favourite text editor, and load the file on to the Glasgow Haskell Compiler interpreter (GHCi) using the following code:

$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :l Sum.hs
[1 of 1] Compiling Main             ( Sum.hs, interpreted )
Ok, modules loaded: Main.
*Main> :t sumInt
sumInt :: Int -> Int -> Int
*Main> sumInt 2 3
5

If we check the type of sumInt with arguments, we get the following output:

*Main> :t sumInt 2 3
sumInt 2 3 :: Int


*Main> :t sumInt 2

sumInt 2 :: Int -> Int

The value of sumInt 2 3 is an Int as defined in the type signature. We can also partially apply the function sumInt with one argument and its return type will be Int -> Int. In other words, sumInt 2 takes an integer and will return an integer with 2 added to it.

Every function in Haskell takes only one argument. So, we can think of the sumInt function as one that takes an argument and returns a function that takes another argument and computes their sum. This return function can be defined as a sumTwoInt function that adds a 2 to an Int using the sumInt function, as shown below:

sumTwoInt :: Int -> Int

sumTwoInt x = sumInt 2 x

The ‘=’ sign in Haskell signifies a definition and not a variable assignment as seen in imperative programming languages. We can thus omit the ‘x' on either side and the code becomes even more concise:

sumTwoInt :: Int -> Int

sumTwoInt = sumInt 2

By loading Sum.hs again in the GHCi prompt, we get the following:

*Main> :l Sum.hs

[1 of 1] Compiling Main ( Sum.hs, interpreted )

Ok, modules loaded: Main.

*Main> :t sumTwoInt

sumTwoInt :: Int -> Int

*Main> sumTwoInt 3

5

Let us look at some examples of functions that operate on lists. Consider list ‘a', which is defined as [1, 2, 3, 4, 5] (a list of integers) in the Sum.hs file (re-load the file in GHCi before trying the list functions).

a :: [Int]

a = [1, 2, 3, 4, 5]

The head function returns the first element of a list:

*Main> head a

1

*Main> :t head

head :: [a] -> a

The tail function returns everything except the first element

from a list:

*Main> tail a

[2,3,4,5]

*Main> :t tail

tail :: [a] -> [a]

The last function returns the last element of a list:

*Main> last a

5

*Main> :t last

last :: [a] -> a

The init function returns everything except the last element of a list:

*Main> init a

[1,2,3,4]

*Main> :t init

init :: [a] -> [a]

The length function returns the length of a list:

*Main> length a

5

*Main> :t length

length :: [a] -> Int

The take function picks the first ‘n' elements from a list:

*Main> take 3 a

[1,2,3]

*Main> :t take

take :: Int -> [a] -> [a]

The drop function drops ‘n' elements from the beginning of a list, and returns the rest:

*Main> drop 3 a

[4,5]

*Main> :t drop

drop :: Int -> [a] -> [a]

The zip function takes two lists and creates a new list of tuples with the respective pairs from each list. For example:

*Main> let b = ["one", "two", "three", "four", "five"]


*Main> zip a b

[(1,"one"),(2,"two"),(3,"three"),(4,"four"),(5,"five")]

*Main> :t zip

zip :: [a] -> [b] -> [(a, b)]

The let expression defines the value of ‘b' in the GHCi prompt. You can also define it in a way that’s similar to the definition of the list ‘a' in the source file.
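For instance, the same list could also be added to Sum.hs as a top-level definition; a small sketch, mirroring the definition of 'a', would be:

b :: [String]
b = ["one", "two", "three", "four", "five"]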

The lines function takes input text and splits it at new lines:

*Main> let sentence = "First\nSecond\nThird\nFourth\nFifth"

*Main> lines sentence

["First","Second","Third","Fourth","Fifth"]

*Main> :t lines

lines :: String -> [String]

The words function takes input text and splits it on white space:

*Main> words "hello world"

["hello","world"]

*Main> :t words

words :: String -> [String]

The map function takes a function and a list, and applies the function to every element in the list:

*Main> map sumTwoInt a

[3,4,5,6,7]

*Main> :t map

map :: (a -> b) -> [a] -> [b]

The first argument to map is a function that is enclosed within parentheses in the type signature (a -> b). This function takes an input of type 'a' and returns an element of type 'b'. Thus, when operating over a list [a], map returns a list of type [b].
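As another small example, assuming the list 'b' of strings defined earlier in the GHCi session, map can apply the built-in length function to every element:

*Main> map length b
[3,3,5,4,4]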

Recursion provides a means of looping in functional programming languages. The factorial of a number, for example, can be computed in Haskell, using the following code:

factorial :: Int -> Int

factorial 0 = 1

factorial n = n * factorial (n-1)

The definition of factorial with different input use cases is called pattern matching on the function. On running the above example with GHCi, you get the following output:

*Main> factorial 0

1

*Main> factorial 1

1

*Main> factorial 2

2

*Main> factorial 3

6

*Main> factorial 4

24

*Main> factorial 5

120

Functions operating on lists can also be called recursively. To compute the sum of a list of integers, you can write the sumList function as:

sumList :: [Int] -> Int

sumList [] = 0

sumList (x:xs) = x + sumList xs

The notation (x:xs) represents a list, where ‘x' is the first element in the list and ‘xs' is the rest of the list. On running sumList with GHCi, you get the following:

*Main> sumList []

0

*Main> sumList [1,2,3]

6
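The same (x:xs) pattern works for other list computations too. As a small sketch (productList is not part of the article's Sum.hs file), the product of a list of integers could be written as:

productList :: [Int] -> Int
productList [] = 1
productList (x:xs) = x * productList xs

Loading this definition and evaluating productList [1, 2, 3, 4] would give 24.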

Sometimes, you will need a temporary function for a computation, which you will not need to use elsewhere. You can then write an anonymous function. A function to increment an input value can be defined as:

*Main> (\x -> x + 1) 3

4

These are called lambda functions, and the '\' is the notation for the lambda symbol. Another example is given below:

*Main> map (\x -> x * x) [1, 2, 3, 4, 5]

[1,4,9,16,25]

It is a good practice to write the type signature of the function first when composing programs, and then write the body of the function. Haskell is a functional programming language and understanding the use of functions is very important.

By: Shakthi Kannan

The author is a free software enthusiast and blogs at shakthimaan.com


Qt-WebKit, a major engine that can render Web pages and execute JavaScript code, is the answer to the developer's prayer. Let's take a look at a few examples that will aid developers in making better use of this engine.

This article is for Qt developers. It is assumed that the intended audience is aware of the famous Signals and Slots mechanism of Qt. Creating an HTML page is very quick compared to any other way of designing a GUI. An HTML page is nothing but a fancy page that doesn't have any logic built into it. With the amalgamation of JavaScript, however, the HTML page builds in some intelligence. As everything cannot be collated in JavaScript, we need a back-end for it. Qt provides a way to mingle HTML and JavaScript with C++. Thus, you can call C++ methods from JavaScript and vice versa. This is possible by using the Qt-WebKit framework. The applications developed in Qt are not just limited to various desktop platforms; they have even been ported to several mobile platforms. Thus, you can design apps that fit into the Windows, iOS and Android worlds seamlessly.

What is Qt-WebKit?

In simple words, Qt-WebKit is the Web-browsing module of Qt. It can be used to display live content from the Internet as well as local HTML files.

Programming paradigm

In Qt-WebKit, the main class is QWebView. Below it sits QWebPage, and a further level down is QWebFrame. This is useful while adding the desired class object to the JavaScript window object. In short, this class object will be visible to JavaScript once it is added to the JavaScript window object. However, JavaScript can invoke only the public Q_INVOKABLE methods. The Q_INVOKABLE restriction was introduced to make applications developed using Qt even more secure.

Q_INVOKABLE: This is a macro that is similar to a Slot, except that it has a return type. Thus, we will prefix Q_INVOKABLE to the methods that can be called by JavaScript. The advantage here is that we can have a return type with Q_INVOKABLE, as compared to a Slot.

Developing a sample HTML page with JavaScript intelligence

Here is a sample form in HTML-JavaScript that will allow us to multiply any two given numbers. However, the logic of multiplication should reside in the C++ method only.

<html>

<head>

<script>

function Multiply()

{

/** MultOfNumbers a C++ Invokable method **/

var result = myoperations.MultOfNumbers(document.forms["DEMO_FORM"]["Multiplicant_A"].value,
                                        document.forms["DEMO_FORM"]["Multiplicant_B"].value);

document.getElementById("answer").value = result;

}

</script>

</head>

<body>

<form name="DEMO_FORM">

Multiplicant A: <input type="number"

name="Multiplicant_A"><br>

Multiplicant B: <input type="number"



name="Multiplicant_B"><br>

Result: <input type="number" id="answer"

name="Multiplicant_C"><br>

<input type="button" value="Multiplication_compute_on_C++"

onclick="Multiply()">

</form>

</body>

</html>

Please note that in the above HTML code, myoperations is a class object. And MultOfNumbers is its public Q_INVOKABLE class method.

How to call the C++ methods from the Web page using the Qt-WebKit framework

Let's say I have the following class that has the Q_INVOKABLE method, MultOfNumbers.

class MyJavaScriptOperations : public QObject {

Q_OBJECT

public:

Q_INVOKABLE qint32 MultOfNumbers(int a, int b) {

qDebug() << a * b;

return (a*b);

}

};

This class object should be added to the JavaScript window object by the following API:

addToJavaScriptWindowObject("name of the object", new (class that can be accessed))

Here is the entire program:

#include <QtGui/QApplication>

#include <QApplication>

#include <QDebug>

#include <QWebFrame>

#include <QWebPage>

#include <QWebView>

class MyJavaScriptOperations : public QObject {

Q_OBJECT

public:

Q_INVOKABLE qint32 MultOfNumbers(int a, int b) {

qDebug() << a * b;

return (a*b);

}

};

int main(int argc, char *argv[])

{

QApplication a(argc, argv);

QWebView *view = new QWebView();

view->resize(400, 500);

view->page()->mainFrame()->addToJavaScriptWindowObject("myoperations", new MyJavaScriptOperations);

view->load(QUrl("./index.html"));

view->show();

return a.exec();

}

#include "main.moc"

The output is given in Figure 1.

How to install a callback from C++ code to the Web page using the Qt-WebKit framework

We have already seen the call to C++ methods by JavaScript. Now, how about a callback from C++ to JavaScript? Yes, it is possible with Qt-WebKit. There are two ways to do so. However, for the sake of neatness in design, let's discuss only the Signals and Slots mechanism for the JavaScript callback.

Installing Signals and Slots for the JavaScript function

Here are the steps that need to be taken for the callback to be installed:
a) Add a JavaScript window object in the javaScriptWindowObjectCleared slot.
b) Declare a signal in the class.
c) Emit the signal.
d) In JavaScript, connect the signal to the JavaScript function slot. Here is the syntax to help you connect:

<JavaScript_window_object>.<signal_name>.connect(<JavaScript function name>);

Note, you can make a callback to JavaScript only after the Web page is loaded. This can be ensured by connecting to the Slot emitted by the Signal loadFinished() in the C++ application.

Let’s look at a real example now. This will fire a callback once the Web page is loaded.

The callback should be addressed by the JavaScript function, which will show up an alert window.

Figure 1: QT DEMO output



<html>

<head>

<script>

function alert_click()

{

alert("you clicked");

}

function JavaScript_function()

{

alert("Hello");

}

myoperations.alert_script_signal.connect(JavaScript_function);

</script>

</head>

<body>

<form name="myform">

<input type="button" value="Hit me" onclick="alert_click()">

</form>

</body>

</html>

Here is the main file:

#include <QtGui/QApplication>

#include <QApplication>

#include <QDebug>

#include <QWebFrame>

#include <QWebPage>

#include <QWebView>

class MyJavaScriptOperations : public QObject {

Q_OBJECT

public:

QWebView *view;

MyJavaScriptOperations();

signals:

void alert_script_signal();

public slots:

void JS_ADDED();

void loadFinished(bool);

};

void MyJavaScriptOperations::JS_ADDED()

{

qDebug()<<__PRETTY_FUNCTION__;

view->page()->mainFrame()->addToJavaScriptWindowObject("myoperations", this);

}

void MyJavaScriptOperations::loadFinished(bool oper)

{

qDebug()<<__PRETTY_FUNCTION__<< oper;

emit alert_script_signal();

}

MyJavaScriptOperations::MyJavaScriptOperations()

{

qDebug()<<__PRETTY_FUNCTION__;

view = new QWebView();

view->resize(400, 500);

connect(view->page()->mainFrame(), SIGNAL(javaScriptWindowObjectCleared()), this, SLOT(JS_ADDED()));

connect(view, SIGNAL(loadFinished(bool)), this,

SLOT(loadFinished(bool)));

view->load(QUrl("./index.html"));

view->show();

}

int main(int argc, char *argv[])

{

QApplication a(argc, argv);

MyJavaScriptOperations *jvs = new MyJavaScriptOperations;

return a.exec();

}

#include "main.moc"

The output is shown in Figure 2.

Figure 2: QT DEMO callback output

Qt is a rich framework for C++ developers. It not only provides these amazing features, but also has some interesting attributes like in-built SQLite, D-Bus and various containers. It's easy to develop an entire GUI application with it. You can even port an existing HTML page to Qt. This makes Qt a wonderful choice for developing a cross-platform application quickly. It is now getting popular in the mobile world too.

By: Shreyas Joshi

The author is a technology enthusiast and software developer at Pace Micro Technology. You can connect with him at [email protected].


This article focuses on Yocto – a complete embedded Linux development environment that offers tools, metadata and documentation.

The Yocto Project helps developers and companies get their project off the ground. It is an open source collaboration project that provides templates, tools and methods to create custom Linux-based systems for embedded products, regardless of the hardware architecture.

While building Linux-based embedded products, it is important to have full control over the software running on the embedded device. This doesn't happen when you are using a normal Linux OS for your device. The software should have full access as per the hardware requirements. That's where the Yocto Project comes in handy. It helps you create custom Linux-based systems for any hardware architecture and makes the device easier to use and faster than expected.

The Yocto Project was founded in 2010 as a solution for embedded Linux development by many open source vendors, hardware manufacturers and electronics companies. The project aims at helping developers build their own Linux distributions, specific to their own environments. The project provides developers with interoperable tools, methods and processes that help in the development of Linux-based embedded systems. The central goal of the project is to enable the user to reuse and customise tools and working code. It encourages interaction with embedded projects and has been a steady contributor to the OpenEmbedded core, BitBake, the Linux kernel development process and several other projects. It not only deals with building Linux-based embedded systems, but also with the toolchain for cross compilation and software development kits (SDKs), so that users can choose the package manager format they intend to use.

The goals of the Yocto Project

Although the main aim is to help developers of customised Linux systems supporting various hardware architectures, the project also has a key role in several other fields, where it supports and encourages the Linux community. Its goals are:
• To develop custom Linux-based embedded systems regardless of the architecture.
• To provide interoperability between tools and working code, which will reduce the money and time spent on the project.
• To develop licence-aware build systems that make it possible to include or remove software components based on specific licence groups and the corresponding restriction levels.
• To provide a place for open source projects that help in the development of Linux-based embedded systems and customisable Linux platforms.
• To focus on creating single build systems that address the needs of all users, to which other software components can later be tethered.
• To ensure that the tools developed are architecturally independent.
• To provide a better graphical user interface to the build system, which eases access.
• To provide resources and information, catering to both new and experienced users.
• To provide core system component recipes provided by the OpenEmbedded project.
• To further educate the community about the benefits of this standardisation and collaboration in the Linux community and in the industry.

The Yocto Project community

The community shares many common traits with a typical open source organisation. Anyone who is interested can contribute to the development of the project. The Yocto Project is developed and governed as a collaborative effort by an open community of professionals, volunteers and contributors.

The project's governance is mainly divided into two wings: administrative and technical.


The administrative board includes executive leaders from organisations that participate on the advisory board, and also several sub-groups that perform non-technical services including community management, financial management, infrastructure management, advocacy and outreach. The technical board includes several sub-groups, which oversee tasks ranging from submitting patches to the project architect to deciding who is the final authority on the project.

The building of the project requires the coordinated efforts of many people, who work in several roles. These roles are listed below.
• Architect: One who holds the final authority and provides overall leadership to the project's development.
• Sub-system maintainers: The project is further divided into several sub-projects, and maintainers are assigned to these sub-projects.
• Layer maintainers: Those who ensure the components' excellence and functionality.
• Technical leaders: Those who work within the sub-projects, doing the same thing as the layer maintainers.
• Upstream projects: Many Yocto Project components, such as the Linux kernel, are dependent on upstream projects.
• Advisory board: The advisory board gives direction to the project and helps in setting the requirements for the project.

Layers

The build system is composed of different layers, which are the containers for the building blocks used to construct the system. The layers are grouped according to functionality, which makes the management of extensions and customisations easier.
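To give a feel for how layers are pulled into a build, here is a rough sketch of a conf/bblayers.conf file from a build directory; the paths and the meta-custom-board layer are hypothetical:

BBLAYERS ?= " \
  /home/user/poky/meta \
  /home/user/poky/meta-yocto \
  /home/user/poky/meta-yocto-bsp \
  /home/user/poky/meta-custom-board \
  "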

Figure 1: YP community

Figure 2: YP layers (developer-specific layer, commercial layer, UI-specific layer, hardware-specific BSP, Yocto-specific layer metadata, OpenEmbedded core metadata)

References
[1] https://www.yoctoproject.org/
[2] https://wiki.yoctoproject.org/wiki/Main_Page

By: Vishnu N K

The author, an open source enthusiast, is in the midst of his B. Tech degree in Computer Science at Amrita Vishwa Vidyapeetham and contributes to Mediawiki. Contact him at [email protected]

Latest updates

Yocto Project 1.6
The latest release of the Yocto Project (YP), 1.6 'Daisy', has a great set of features to help developers build with a very good user interface. Toaster, a new UI for the YP build system, enables detailed examination of the build output, with great control over the view of the data. The Linux kernel update and the GCC update to 4.8.2 add further functionality to the latest release. It also supports building Python 3. The new client for reporting errors to a central Web interface helps developers focus on problem management.

AMD and LG Electronics partner with Yocto
The introduction of new standardised features to ensure quick access to the latest Board Support Packages (BSPs) for the AMD 64-bit x86 architecture has made AMD a new gold member of the YP community. LG Electronics, joining as a new member organisation to help support and guide the project, is also of great importance.

Embedded Linux Conference 2014
The Yocto Project is one of the silver sponsors of this premier vendor-neutral technical conference for companies and developers that use Linux in embedded products. Sponsored by the Linux Foundation, it has a key role in encouraging newcomers to the world of open source and embedded products.

Toaster prototype
Toaster, a part of the latest YP 1.6 release, is a Web interface for BitBake, the build system. Toaster collects all kinds of data about the build process, so that it is easy to search and query this data in specific ways.


What is Linux Kernel Porting?

One of the aspects of hacking a Linux kernel is to port it. While this might sound difficult, it won't be once you read this article. The author explains porting techniques in a simplified manner.

With the evolution of embedded systems, porting has become extremely important. Whenever you have new hardware at hand, the first and the most critical thing to be done is porting. For hobbyists, what has made this even more interesting is the open source nature of the Linux kernel. So, let's dive into porting and understand the nitty-gritty of it.

Porting means making something work in an environment it is not designed for. Embedded Linux porting means making Linux work on an embedded platform for which it was not designed. Porting is a broader term, and when I say 'embedded Linux porting', it not only involves Linux kernel porting, but also porting a first stage bootloader, a second stage bootloader and, last but not the least, the applications. Porting differs from development. Usually, porting doesn't involve as much coding as development. This means that there is already some code available and it only needs to be fine-tuned to the desired target. There may be a need to change a few lines here and there before it is up and running. But the key thing to know is what needs to be changed, and where.

What Linux kernel porting involves

Linux kernel porting involves two things at a higher level: architecture porting and board porting. Architecture, in Linux terminology, refers to the CPU. So, architecture porting means adapting the Linux kernel to the target CPU, which may be ARM, PowerPC, MIPS, and so on. In addition to this, SoC porting can also be considered part of architecture porting. As far as the Linux kernel is concerned, most of the time you don't need to port it for the architecture, as this would already be supported in Linux. However, you still need to port Linux for the board, and this is where the major focus lies. Architecture porting entails porting of the initial start-up code, interrupt service routines, the dispatcher routine, the timer routine, memory management, and so on.


Board porting, on the other hand, involves writing custom drivers and initialisation code for devices specific to the board.

Building a Linux kernel for the target platform

Kernel building is a two-step process: first, the kernel needs to be configured for the target platform. There are many ways to configure the kernel, based on the preferred configuration interface. Given below are some of the common methods.

To run the text-based configuration, execute the following command:

$ make config

This will show the configuration options on the console as seen in Figure 1. It is a little cumbersome to configure the kernel with this, as it prompts every configuration option, in order, and doesn't allow the reversion of changes.

To run the menu-driven configuration, execute the following command:

$ make menuconfig

This will show the menu options for configuring the kernel, as seen in Figure 2. This requires the ncurses library to be installed on the system. This is the most popular interface used to configure the kernel.

To run the window-based configuration, execute the following command:

$ make xconfig

This allows configuration using the mouse. It requires QT to be installed on the system.

For details on other options, execute the following command in the kernel top directory:

$ make help

Once the kernel is configured, the next step is to build the kernel with the make command. A few commonly used commands are given below:

$ make vmlinux - Builds the bare kernel

$ make modules - Builds the modules

$ make modules_prepare – Sets up the kernel for building the

modules external to kernel.

If the above commands are executed as stated, the kernel will be configured and compiled for the host system, which is generally the x86 platform. But, for porting, the intention is to configure and build the kernel for the target platform, which in turn, requires configuration of makefile. Two things that need to be changed in the makefile are given below:

ARCH=<architecture>

CROSS-COMPILE = <toolchain prefix>

The first line defines the architecture the kernel needs to be built for, and the second line defines the cross compilation toolchain prefix. So, if the architecture is ARM and the toolchain is say, from CodeSourcery, then it would be:

ARCH=arm

CROSS_COMPILE=arm-none-linux-gnueabi-

Optionally, make can be invoked as shown below:

$ make ARCH=arm menuconfig - For configuring the kernel

$ make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- - For

compiling the kernel

The kernel image generated after the compilation is usually vmlinux, which is in ELF format. This image can't be used directly with embedded system bootloaders such as u-boot. So convert it into the format suitable for a second stage bootloader. Conversion is a two-step process and is done with the following commands:

arm-none-linux-gnueabi-objcopy -O binary vmlinux vmlinux.bin

mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e

0x80008000 -n linux-3.2.8 -d vmlinux.bin uImage

-A ==> set architecture

-O ==> set operating system

Figure 1: Plain text-based kernel configuration

Figure 2: Menu-driven kernel configuration


-T ==> set image type

-C ==> set compression type

-a ==> set load address (hex)

-e ==> set entry point (hex)

-n ==> set image name

-d ==> use image data from file

The first command converts the ELF into a raw binary. This binary is then passed to mkimage, which is a utility to generate the u-boot specific kernel image. mkimage is the utility provided by u-boot. The generated kernel image is named uImage.
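Once uImage has been copied to the target (over TFTP, an SD card, or any other medium), u-boot can load and boot it. A rough sketch of such a session is shown below; the load address is hypothetical and depends on the board's memory map:

=> tftpboot 0x82000000 uImage
=> bootm 0x82000000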

The Linux kernel build system

One of the beautiful things about the Linux kernel is that it is highly configurable and the same code base can be used for a variety of applications, ranging from high-end servers to tiny embedded devices. And the infrastructure which plays an important role in achieving this in an efficient manner is the kernel build system, also known as kbuild. The kernel build system has two main components – makefile and Kconfig.

Makefile: Every sub-directory has its own makefile, which is used to compile the files in that directory and generate the object code out of that. The top level makefile percolates recursively into its sub-directories and invokes the corresponding makefile to build the modules and finally, the Linux kernel image. The makefile builds only the files for which the configuration option is enabled through the configuration tool.

Kconfig: As with the makefile, every sub-directory has a Kconfig file. Kconfig is a configuration language, and the Kconfig files located inside each sub-directory are programs written in that language. Kconfig contains the entries which are read by configuration targets such as make menuconfig to show a menu-like structure.

So we have covered makefile and Kconfig and at present they seem to be pretty much disconnected. For kbuild to work properly, there has to be some link between the Kconfig and makefile. And that link is nothing but the configuration symbols, which generally have a prefix CONFIG_. These symbols are generated by a configuration target such as menuconfig, based on entries defined in the Kconfig file. And based on what the user has selected in the menu, these symbols can have the values ‘y', ‘n', or ‘m'.

Now, as most of us are aware, Linux supports hot plugging of the drivers, which means, we can dynamically add and remove the drivers from the running kernel. The drivers which can be added/removed dynamically are known as modules. However, drivers that are part of the kernel image can't be removed dynamically. So, there are two ways to have a driver in the kernel. One is to build it as a part of the kernel, and the other is to build it separately as a module for hot-plugging. The value ‘y' for CONFIG_, means the corresponding driver will be part of the kernel image; the value ‘m' means it will be built as a module and value ‘n'

means it won't be built at all. Where are these values stored? There is a file called .config in the top level directory, which holds these values. So, the .config file is the output of the configuration target such as menuconfig.

Where are these symbols used? In makefile, as shown below:

obj-$(CONFIG_MY_DRIVER) += my_driver.o

So, if CONFIG_MY_DRIVER is set to the value 'y', the driver my_driver.c will be built as part of the kernel image; if set to the value 'm', it will be built as a module with the extension .ko. And for the value 'n', it won't be compiled at all.

As you now know a little more about kbuild, let’s consider adding a simple character driver to the kernel tree.

The first step is to write the driver and place it at the correct location. I have a file named my_driver.c. Since it's a character driver, I prefer to add it in the drivers/char/ sub-directory. So copy it to drivers/char in the kernel source.

The next step is to add a configuration entry in the drivers/char/Kconfig file. Each entry can be of type bool, tristate, int, string or hex. bool means that the configuration symbol can have the values ‘y' or ‘n', while tristate means it can have values ‘y', ‘m' or ‘n'. And ‘int', ‘string' and ‘hex' mean that the value can be an integer, string or hexadecimal, respectively. Given below is the segment of code added in drivers/char/Kconfig:

config MY_DRIVER

tristate "Demo for My Driver"

default m

help

Adding this small driver to kernel for

demonstrating the kbuild

The first line defines the configuration symbol. The second specifies the type for the symbol and the text which will be shown in the menu. The third specifies the default value for this symbol, and the last two lines are for the help message. Another thing that you will generally find in a Kconfig file is 'depends on'. This is very useful when you want a particular feature to be selectable only if its dependency is selected. For example, if we are writing a driver for an i2c EEPROM, then the menu option for the driver should appear only if the i2c driver is selected. This can be achieved with the 'depends on' entry.
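A minimal sketch of such an entry is shown below; MY_EEPROM is a hypothetical symbol, separate from the MY_DRIVER entry above, and I2C is the kernel's existing configuration symbol for i2c support:

config MY_EEPROM
	tristate "Demo driver for an I2C EEPROM"
	depends on I2C
	help
	  This option is visible only when I2C support is selected.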

After saving the above changes in Kconfig, execute the following command:

$ make menuconfig

Now, navigate to Device Drivers->Character devices and you will see an entry for My Driver.

By default, it is supposed to be built as a module. Once you are done with configuration, exit the menu and save the configuration. This saves the configuration in .config file. Now,


open the .config file, and there will be an entry as shown below:

CONFIG_MY_DRIVER=m

Here, the driver is configured to be built as a module. Also, one thing worth noting is that the symbol ‘MY_DRIVER' in Kconfig is prefixed with CONFIG_.

Now, just adding an entry in the Kconfig file and configuration alone won't compile the driver. There has to be the corresponding change in makefile as well. So, add the following line to makefile:

obj-$(CONFIG_MY_DRIVER) += my_driver.o

After the kernel is compiled, the module my_driver.ko will be placed at drivers/char/. This module can be inserted in the kernel with the following command:

$ insmod my_driver.ko

Aren't these configuration symbols needed in the C code? Yes, or else how will the conditional compilation be taken care of? How are these symbols included in C code? During the kernel compilation, the Kconfig and .config files are read, and are used to generate the C header file named autoconf.h. This is placed at include/generated and contains the #defines for the configuration symbols. These symbols are used by the C code to conditionally compile the required code.
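For instance, code elsewhere in the kernel could key off the symbol added above. A small sketch is shown below; report_my_driver_support() and its messages are purely illustrative and not part of any real kernel file:

#include <linux/printk.h>

static void report_my_driver_support(void)
{
	/* CONFIG_MY_DRIVER is #defined in autoconf.h only when the option
	 * is set to 'y'; for 'm', a CONFIG_MY_DRIVER_MODULE define is
	 * generated instead. */
#ifdef CONFIG_MY_DRIVER
	pr_info("my_driver: built into this kernel\n");
#else
	pr_info("my_driver: not built into this kernel\n");
#endif
}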

Now, let's suppose I have configured the kernel and that it works fine with this configuration. If I now make some new changes in the kernel configuration, the earlier ones will be overwritten. To avoid this, we can save the .config file in the arch/arm/configs directory with a name like my_config_defconfig, for instance. And next time, we can execute the following command to configure the kernel with the older options:

$ make my_config_defconfig
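As a rough sketch, assuming you are in the kernel top directory and building for ARM, the save-and-reuse cycle could look like this:

$ cp .config arch/arm/configs/my_config_defconfig
$ make ARCH=arm my_config_defconfig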

Linux Support Packages (LSP)/Board Support Packages (BSP)

One of the most important, and probably the most challenging, things in porting is the development of Board Support Packages (BSPs). BSP development is a one-time effort during the product development lifecycle and, obviously, the most critical. As we have discussed, porting involves architecture porting and board porting. Board porting involves board-specific initialisation code that includes initialisation of the various interfaces such as memory, and peripherals such as serial and i2c, which in turn involves driver porting.

There are two categories of drivers. One is the standard device driver such as the i2c driver and block driver located at the standard directory location. Another is the custom interface or device driver, which includes the

board-specific custom code and needs to be specifically brought in with the kernel. And this collection of board-specific initialisation and custom code is referred to as a Board Support Package or, in Linux terminology, a LSP. In simple words, whatever software code you require (which is specific to the target platform) to boot up the target with the operating system can be called LSP.

Components of LSP

As the name itself suggests, a BSP is dependent on the things that are specific to the target board. So, it consists of code which is specific to that particular board, and it applies only to that board. The usual list includes Interrupt Request Numbers (IRQs), which depend on how the various devices are connected on the board. Also, some boards have an audio codec, and you need to have a driver for that codec. Likewise, there would be switch interfaces, a matrix keypad, external EEPROM, and so on.

LSP placement

The LSP is placed under a specific <arch> folder of the kernel's arch folder. For example, architecture-specific code for ARM resides in the arch/arm directory. This is about the code, but you also need the headers, which are placed under arch/arm/include/asm. Board-specific code, however, is placed at arch/arm/mach-<soc> and the corresponding headers are placed at arch/arm/mach-<soc architecture>/include. For example, the LSP for the Beagle Board is placed at arch/arm/mach-omap2/board-omap3beagle.c and the corresponding headers are placed at arch/arm/mach-omap2/include/mach/. This is shown in Figure 4.

Figure 3: Menu option for My Driver

Machine ID

Every board in the kernel is identified by a machine ID. This helps the kernel maintainers to manage the boards based on ARM architecture in the source tree. This ID is passed to the kernel from the second stage bootloader, such as u-boot. For the kernel to boot properly, there has to be a match between the kernel and the second stage bootloader. This information is available in arch/arm/tools/mach-types and is used to generate the file linux/include/generated/mach-types.h. The macros defined by mach-types.h are used by the rest of the kernel code. For example, the machine ID for the Beagle Board is 1546, and this is the number which the second stage bootloader passes to the kernel. For registering a new board for ARM, provide the board details at http://www.arm.linux.org.uk/developer/machines/?action=new.

Note: The porting concepts described here are specific to boards based on the ARM platform and may differ for other architectures.

MACHINE_START macro

One of the steps involved in kernel porting is to define the initialisation functions for the various interfaces on the board, such as serial, Ethernet, GPIO, etc. Once these functions are defined, they need to be linked with the kernel so that it can invoke them during boot-up. For this, the kernel provides the macro MACHINE_START. Typically, a MACHINE_START macro looks like what's shown below:

MACHINE_START(MY_BOARD, "My Board for Demo")

.atag_offset = 0x100,

.init_early = my_board_early,

.init_irq = my_board_irq,

.init_machine = my_board_init,

MACHINE_END

Let's understand this macro. MY_BOARD is the machine ID defined in arch/arm/tools/mach-types. The second parameter to the macro is a string describing the board. The next few lines specify the various initialisation functions, which the kernel has to invoke during boot-up. These include the following:

.atag_offset: Defines the offset in RAM, where the boot parameters will be placed. These parameters are passed from the second stage bootloader, such as u-boot.

my_board_early: Calls the SOC initialisation functions. This function will be defined by the SOC vendor, if the kernel is ported for it.

my_board_irq: Intialisation related to interrupts is done over here.

my_board_init: All the board-specific initialisation is

done here. This function should be defined during the board porting. This includes things such as setting up the pin multiplexing, initialisation of the serial console, initialisation of RAM, initialisation of Ethernet, USB and so on.

MACHINE_END ends the macro. This macro is defined in arch/arm/include/asm/mach/arch.h.
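To make this concrete, here is a rough sketch of what a board initialisation function referenced by the macro might do; the helper names and device structures (my_board_mux_init, my_board_serial_device, my_board_spi_info) are hypothetical and not taken from any real board file:

/* Illustrative sketch only: the helpers referenced here are hypothetical. */
static void __init my_board_init(void)
{
	/* Set up pin multiplexing for the interfaces used on the board */
	my_board_mux_init();

	/* Register the on-board devices with the kernel */
	platform_device_register(&my_board_serial_device);
	spi_register_board_info(my_board_spi_info,
				ARRAY_SIZE(my_board_spi_info));
}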

How to begin with porting

The most common and recommended way to begin with porting is to start with some reference board which closely resembles yours. So, if you are porting for a board based on the OMAP3 architecture, take the Beagle Board as a reference. Also, for porting, you should understand the system very well. Depending on the features available on your board, configure the kernel accordingly. To start with, just enable the minimal set of features required to boot the kernel. This may include, but not be limited to, initialisation of RAM, the GPIO subsystem, serial interfaces, and the filesystem drivers needed to mount the root filesystem. Once the kernel boots up with the minimal configuration, start adding new features, as required.

So, let's summarise the steps involved in porting:
1. The first step is to register the machine with the kernel maintainer and get a unique ID for your board. While this is not necessary to begin with porting, it needs to be done eventually, if patches are to be submitted to the mainline. Place the machine ID in arch/arm/tools/mach-types.

2. Create the board-specific file ‘board-<board_name>' at arch/arm/mach-<soc> and define the MACHINE_START for the new board. For example, the board-specific file for the Panda Board resides at arch/arm/mach-omap2/board-omap4panda.c.

3. Update the Kconfig file at arch/arm/mach-<soc> to add an entry for the new board, as shown below:

config MACH_MY_BOARD

bool “My Board for Demo”

depends on ARCH_OMAP3

default y

4. Update the corresponding makefile, so that the board-specific file gets compiled. This is shown below:

obj-$(CONFIG_MACH_MY_BOARD) += board-my_board.o

5. Create a default configuration file for the new board. To begin with, take any .config file as a starting point and customise it for the new board. Place the working .config file at arch/arm/configs/my_board_defconfig.

By: Pradeep Tewani

The author works at Intel, Bangalore. He shares his learnings on Linux & embedded systems through his weekend workshops. Learn more about his experiments at http://sysplay.in. He can be reached at [email protected].

Figure 4: LSP placement in kernel source


Writing an RTC Driver Based on the SPI Bus

Most computers have one or more hardware clocks that display the current time. These are 'Real Time Clocks' or RTCs. Battery backup is provided for one of these clocks so that time is tracked even when the computer is switched off. RTCs can be used for alarms and other functions like switching computers on or off. This article explains how to write Linux device drivers for SPI-based RTC chips.

We will focus on the RTC DS1347 to explain how device drivers are written for RTC chips. You can refer to the RTC DS1347 datasheet for a complete understanding of this driver.

Linux SPI subsystem

In Linux, the SPI subsystem is designed in such a way that the system running Linux is always an SPI master. The SPI subsystem has three parts, which are listed below.

The SPI master driver: For each SPI bus in the system, there will be an SPI master driver in the kernel, which has routines to read and write on that SPI bus. Each SPI master driver in the kernel is identified by an SPI bus number. For the purposes of this article, let's assume that the SPI master driver is already present in the system.

The SPI slave device: This interface provides a way of describing the SPI slave device connected to the system. In this case, the slave device is RTC DS1347. Describing the SPI slave device is an independent task that can be done as discussed in the section on 'Registering RTC DS1347 as an SPI slave device'.

The SPI protocol driver: This interface provides methods to read and write the SPI slave device (RTC DS1347). Writing an SPI protocol driver is described in the section on 'Registering the DS1347 SPI protocol driver'.

The steps for writing an RTC DS1347 driver based on the SPI bus are as follows:
1. Register RTC DS1347 as an SPI slave device with the SPI master driver, based on the SPI bus number to which the SPI slave device is connected.
2. Register the RTC DS1347 SPI protocol driver.
3. Once the probe routine of the protocol driver is called, register the RTC DS1347 protocol driver's read and write routines with the Linux RTC subsystem.


After all this, the Linux RTC subsystem can use the registered protocol driver’s read and write routines to read and write the RTC.
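Once the driver is registered, the clock shows up under /dev and /sys and can be exercised from user space. A quick check could look like the following sketch; the device index is hypothetical and depends on how many RTCs the system already has:

$ cat /sys/class/rtc/rtc1/time
$ hwclock -r -f /dev/rtc1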

RTC DS1347 hardware overview

RTC DS1347 is a low-current, SPI-compatible real time clock. The information it provides includes the seconds, minutes and hours of the day, as well as what day, date, month and year it is. This information can either be read from or be written to the RTC DS1347 using the SPI interface. RTC DS1347 acts as a slave SPI device and the microcontroller connected to it acts as the SPI master device. The CS pin of the RTC is asserted 'low' by the microcontroller to initiate the transfer, and de-asserted 'high' to terminate the transfer. The DIN pin of the RTC transfers data from the microcontroller to the RTC and the DOUT pin transfers data from the RTC to the microcontroller. The SCLK pin is used to provide a clock by the microcontroller to synchronise the transfer between the microcontroller and the RTC.

The RTC DS1347 works in the SPI Mode 3. Any transfer between the microcontroller and the RTC requires the microcontroller to first send the command/address byte to the RTC. Data is then transferred out of the DOUT pin if it is a read operation; else, data is sent by the microcontroller to the DIN pin of the RTC if it is a write operation. If the MSB bit of the address is one, then it is a read operation; and if it is zero, then it is a write operation. All the clock information is mapped to SPI addresses as shown in Table 1.

Read address   Write address   RTC register   Range
0x81           0x01            Seconds        0 - 59
0x83           0x03            Minutes        0 - 59
0x85           0x05            Hours          0 - 23
0x87           0x07            Date           1 - 31
0x89           0x09            Month          1 - 12
0x8B           0x0B            Day            1 - 7
0x8D           0x0D            Year           0 - 99
0x8F           0x0F            Control        00H - 81H
0x97           0x17            Status         03H - E7H
0xBF           0x3F            Clock burst    -

Table 1: RTC DS1347 SPI register map

When the clock burst command is given to the RTC, the latter will give out the values of seconds, minutes, hours, the date, month, day and year, one by one, and continuously. The clock burst command is used in the driver to read the RTC.

The Linux RTC subsystem

The Linux RTC subsystem is the interface through which Linux manages the time of the system. The following procedure is what the driver goes through to register the RTC with the Linux RTC subsystem.

1. Specify the driver's RTC read and write routines through the function pointer interface provided by the RTC subsystem.
2. Register with the RTC subsystem using the devm_rtc_device_register API.

The RTC subsystem requires the driver to fill the struct rtc_class_ops structure, which has the following function pointers.

read_time: This routine is called by the kernel when the user application executes a system call to read the RTC time.

set_time: This routine is called by the kernel when the user application executes a system call to set the RTC time.

There are other function pointers in the structure, but the above two are the minimum an interface requires for an RTC driver.

Whenever the kernel wants to perform any operation on the RTC, it calls the above function pointer, which will call the driver’s RTC routines.

After the above RTC operations structure has been filled, it has to be registered with the Linux RTC subsystem. This is done through the kernel API:

devm_rtc_device_register(struct device *dev, const char

*name, const struct rtc_class_ops *ops, struct module

*owner);

The first parameter is the device object, the second is the name of the RTC driver, the third is the driver RTC operations structure that has been discussed above, and the last is the owner, which is THIS_MODULE macro.

Figure 1: RTC DS1347 driver block diagram

Registering the RTC DS1347 as an SPI slave device

The Linux kernel requires a description of all devices connected to it. Each subsystem in the Linux driver model has a way of describing the devices related to that subsystem. Similarly, the SPI subsystem represents devices based on the SPI bus as a struct spi_device. This structure defines the SPI slave device connected to the processor running the Linux kernel. The device structure is written in the board file in the Linux kernel, which is a part of the board support package. The board file resides in the arch/ directory in Linux (for example, the board file for the Beagle board is arch/arm/mach-omap2/board-omap3beagle.c). The struct spi_device is not written directly; a different structure called struct spi_board_info is filled in and registered, which creates the struct spi_device in the kernel automatically and links it to the SPI master driver that contains the routines to read and write on the SPI bus. The struct spi_board_info for RTC DS1347 can be written in the board file as follows:

struct spi_board_info spi_board_info[] __initdata = {
	{
		.modalias = "ds1347",
		.bus_num = 1,
		.chip_select = 1,
	},
};

Modalias is the name of the driver used to identify the driver that is related to this SPI slave device—in which case the driver will have the same name. Bus_num is the number of the SPI bus. It is used to identify the SPI master driver that controls the bus to which this SPI slave device is connected. Chip_select is used in case the SPI bus has multiple chip select pins; then this number is used to identify the chip select pin to which this SPI slave device is connected.

The next step is to register the struct spi_board_info with the Linux kernel. In the board file initialisation code, the structure is registered as follows:

spi_register_board_info(spi_board_info, 1);

The first parameter is the array of the struct spi_board_info and the second parameter is the number of elements in the array. In the case of RTC DS1347, it is one. This API will check if the bus number specified in the spi_board_info structure matches with any of the master driver bus numbers that are registered with the Linux kernel. If any of them do match, it will create the struct spi_device and initialise the fields of the spi_device structure as follows:

master = spi_master driver which has the same bus number as

bus_num in the spi_board_info structure.

chip_select = chip_select of spi_board_info

modalias = modalias of spi_board_info

After initialising the above fields, the structure is registered with the Linux SPI subsystem. The following are the fields of the struct spi_device, which will be initialised by the SPI protocol driver as needed by the driver, and if not needed, will be left empty.

max_speed_hz: the maximum rate of transfer on the bus.
bits_per_word: the number of bits per transfer.
mode: the mode in which the SPI device works.

In the above specified manner, any SPI slave device is registered with the Linux kernel and the struct spi_device is created and linked to the Linux SPI subsystem to describe the device. This spi_device struct will be passed as a parameter to the SPI protocol driver probe routine when the SPI protocol driver is loaded.

Registering the RTC DS1347 SPI protocol driver

The driver is the medium through which the kernel interacts with the device connected to the system. In the case of an SPI device, it is called the SPI protocol driver. The first step in writing an SPI protocol driver is to fill the struct spi_driver structure. For RTC DS1347, the structure is filled as follows:

static struct spi_driver ds1347_driver = {

.driver = {

.name = "ds1347",

.owner = THIS_MODULE,

},

.probe = ds1347_probe,

};

The name field has the name of the driver (this should be the same as in the modalias field of the struct spi_board_info). ‘Owner’ is the module that owns the driver, THIS_MODULE is the macro that refers to the current module in which the driver is written (the ‘owner’ field is used for reference counting of the module owning the driver). The probe is the most important routine that is called when the device and the driver are both registered with the kernel.

The next step is to register the driver with the kernel. This is done by a macro module_spi_driver (struct spi_driver *). In the case of RTC DS1347, the registration is done as follows:

module_spi_driver(ds1347_driver);

The probe routine of the driver is called if any of the following cases are satisfied:
1. If the device is already registered with the kernel and then the driver is registered with the kernel.
2. If the driver is registered first, then when the device is registered with the kernel, the probe routine is called.

In the probe routine, we need to read and write on the SPI bus, for which certain common steps need to be followed. These steps are written in generic routines, which are called throughout to avoid duplicating code. The generic routines are written as follows:
1. First, the address of the SPI slave device is written on the SPI bus. In the case of the RTC DS1347, the address should have its most significant bit reset for the write operation (as per the DS1347 datasheet).
2. Then the data is written to the SPI bus.

Since this is a common operation, a separate routine, ds1347_write_reg, is written as follows:

static int ds1347_write_reg(struct device *dev, unsigned char

address, unsigned char data)

{

struct spi_device *spi = to_spi_device(dev);

unsigned char buf[2];

buf[0] = address & 0x7F;

buf[1] = data;

return spi_write_then_read(spi, buf, 2, NULL, 0);

}

The parameters to the routine are the address to which the data has to be written and the data which has to be written to the device. spi_write_then_read is the routine that has the following parameters:

struct spi_device: the slave device to be written.
tx_buf: the transmission buffer. This can be NULL if the transfer is reception only.
tx_no_bytes: the number of bytes in the tx buffer.
rx_buf: the receive buffer. This can be NULL if the transfer is transmission only.
rx_no_bytes: the number of bytes in the receive buffer.

In the case of the RTC DS1347 write routine, only two bytes are to be written: one is the address and the other is the data for that address.

The reading of the SPI bus is done as follows:
1. First, the address of the SPI slave device is written on the SPI bus. In the case of RTC DS1347, the address should have its most significant bit set for the read operation (as per the DS1347 datasheet).
2. Then the data is read from the SPI bus.

Since this is a common operation, a separate routine, ds1347_read_reg, is written as follows:

static int ds1347_read_reg(struct device *dev, unsigned char

address, unsigned char *data)

{

struct spi_device *spi = to_spi_device(dev);

*data = address | 0x80;

return spi_write_then_read(spi, data, 1, data, 1);

}

In the case of RTC DS1347, only one byte, which is the address, is written on the SPI bus and one byte is to be read from the SPI device.

RTC DS1347 driver probe routine

When the probe routine is called, it is passed the spi_device struct which was created when spi_board_info was registered. The first thing the probe routine does is to set the SPI parameters to be used to write on the bus. One of these parameters is the mode in which the SPI device works. In the case of RTC DS1347, it works in SPI Mode 3:

spi->mode = SPI_MODE_3;

bits_per_word is the number of bits transferred. In the case of RTC DS1347, it is 8 bits.

spi->bits_per_word = 8;

After changing the parameters, the kernel has to be informed of the changes, which is done by calling the spi_setup routine as follows:

spi_setup(spi);

The following steps are carried out to check and configure the RTC DS1347:
1. First, the RTC control register is read to see if the RTC is present and if it responds to the read command.
2. Then the write protection of the RTC is disabled so that the code is able to write to the RTC registers.
3. Then the oscillator of the RTC DS1347 is started so that the RTC starts working.

Up to this point, the kernel has been informed that the RTC is on an SPI bus and it has been configured. After the RTC is ready to be read and written by the user, the read and write routines of the RTC are registered with the Linux kernel RTC subsystem as follows:

rtc = devm_rtc_device_register(&spi->dev, "ds1347", &ds1347_rtc_ops, THIS_MODULE);

The parameters are the name of the RTC driver, the RTC operation structure that contains the read and write operations of the RTC, and the owner of the module. After this registration, the Linux kernel will be able to read and write on the RTC of the system. The RTC operation structure is filled as follows:

static const struct rtc_class_ops ds1347_rtc_ops = {

.read_time = ds1347_read_time,

.set_time = ds1347_set_time,

};

The RTC read routine is implemented as follows. The RTC read routine has two parameters, one is the

device object and the other is the pointer to the Linux RTC time structure struct, rtc_time.

The rtc_time structure has the following fields, which have to be filled by the driver:

tm_sec: seconds (0 to 59, same as RTC DS1347)
tm_min: minutes (0 to 59, same as RTC DS1347)


tm_hour: hour (0 to 23, same as RTC DS1347)
tm_mday: day of month (1 to 31, same as RTC DS1347)
tm_mon: month (0 to 11, but RTC DS1347 provides months from 1 to 12, so the value returned by the RTC needs to have 1 subtracted from it)
tm_year: years since 1900 (RTC DS1347 stores years from 0 to 99, and the driver considers the RTC valid from 2000 to 2099, so 100 is added to the value returned by the RTC to give the offset from 1900)

First the clock burst command is executed on the RTC, which gives out all the date and time registers through the SPI interface, i.e., a total of 8 bytes:

buf[0] = DS1347_CLOCK_BURST | 0x80;

err = spi_write_then_read(spi, buf, 1, buf, 8);

if (err)

return err;

Then the date and time read from the RTC are stored in the Linux RTC date and time structure. The RTC keeps time in BCD while Linux uses binary, so a conversion is also done:

dt->tm_sec = bcd2bin(buf[0]);

dt->tm_min = bcd2bin(buf[1]);

dt->tm_hour = bcd2bin(buf[2] & 0x3F);

dt->tm_mday = bcd2bin(buf[3]);

dt->tm_mon = bcd2bin(buf[4]) - 1;

dt->tm_wday = bcd2bin(buf[5]) - 1;

dt->tm_year = bcd2bin(buf[6]) + 100;

After storing the date and time of the RTC in the Linux RTC date and time structure, the date and time are validated through the rtc_valid_tm API, and its status is returned. If the date and time are valid, the kernel will return the date and time in the structure to the user application; else it will return an error:

return rtc_valid_tm(dt);

The RTC write routine is implemented as follows. First, the local buffer is filled with the clock burst write command and the date and time passed to the driver's write routine. The clock burst command informs the RTC that the date and time to be written will follow. Also, the RTC keeps time in BCD format, so the conversion is done here as well:

buf[0] = DS1347_CLOCK_BURST & 0x7F;

buf[1] = bin2bcd(dt->tm_sec);

buf[2] = bin2bcd(dt->tm_min);

buf[3] = (bin2bcd(dt->tm_hour) & 0x3F);

buf[4] = bin2bcd(dt->tm_mday);

buf[5] = bin2bcd(dt->tm_mon + 1);

buf[6] = bin2bcd(dt->tm_wday + 1);

/* year in linux is from 1900 i.e in range of 100

in rtc it is from 00 to 99 */

dt->tm_year = dt->tm_year % 100;

buf[7] = bin2bcd(dt->tm_year);

buf[8] = bin2bcd(0x00);

After this, the data is sent to the RTC device, and the status of the write is sent to the kernel as follows:

return spi_write_then_read(spi, buf, 9, NULL, 0);

Contributing to the RTC subsystem
The RTC DS1347 is a Maxim (Dallas) RTC. There are various other RTCs in the Maxim catalogue that are not supported by the Linux kernel, as is the case with RTCs from various other manufacturers. All the RTCs that are supported by the Linux kernel are present in the drivers/rtc directory of the kernel. The following steps can be taken to add support for an RTC to the Linux kernel:
1. Pick any RTC from a manufacturer's (e.g., Maxim's) catalogue which does not have support in the Linux kernel (see the drivers/rtc directory for supported RTCs).
2. Download the datasheet of the RTC and study its features.
3. Refer to rtc-ds1347.c and the other RTC files in the drivers/rtc directory of the Linux kernel, and go over this article, to see how RTC drivers are implemented.
4. Write the support for the RTC.
5. Use git (see ‘References’ below) to create a patch for the RTC driver you have written.
6. Submit the patch by mailing it to the Linux RTC mailing lists:
[email protected]
[email protected]
[email protected]
7. The patch will be reviewed and any changes required will be suggested; if everything is fine, the driver will be acknowledged and added to the Linux tree.

By: Raghavendra Chandra Ganiga

The author is an embedded firmware development engineer at General Industrial Controls Pvt Ltd, Pune. His interests lie in microcontrollers, networking firmware, RTOS development and Linux device drivers.

References
[1] DS1347 datasheet, datasheets.maximintegrated.com/en/ds/DS1347.pdf
[2] DS1347 driver file, https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/rtc/rtc-ds1347.c
[3] Writing and submitting your first Linux kernel patch (video), https://www.youtube.com/watch?v=LLBrBBImJt4
[4] Writing and submitting your first Linux kernel patch (text file and presentation), https://github.com/gregkh/kernel-tutorial


Use GIT for Linux Kernel Development

This article is aimed at newbie developers who are planning to set up a development environment or move their Linux kernel development environment to GIT.

GIT is a free, open source distributed version control tool. It is easy to learn and is also fast, as most of the operations are performed locally. It has a very small footprint. To compare GIT with Subversion (SVN), another version control tool, refer to http://git-scm.com/about/small-and-fast.

GIT allows multiple local copies (branches), each totally different from the other—it allows the making of clones of the entire repository, so each user will have a full backup of the main repository. Figure 1 gives one among the many pictorial representations of GIT. Developers can clone the main repository, maintain their own local copy (branch and branch1) and push the code changes (branch1) to the main repository. For more information on GIT, refer to http://git-scm.com/book.

Note: GIT is under active development and hence changes are often pushed into its repositories. To get the latest GIT code, use the following command: $ git clone git://git.kernel.org/pub/scm/git/git.git

The kernel
The kernel is the lowest level program that manages communications between the software and hardware, using IPC and system calls. It resides in the main memory (RAM) when any operating system is loaded.

The kernel is mainly of two types: the micro kernel and the monolithic kernel. The Linux kernel is monolithic, as depicted clearly in Figure 2. Based on that diagram, the kernel can be viewed as a resource manager; the managed resource could be a process, hardware, memory or storage devices. More details about the internals of the Linux kernel can be found at http://kernelnewbies.org/LinuxVersions and https://www.kernel.org/doc/Documentation/.

Linux kernel files and modules
In Ubuntu, kernel files are stored under the /boot/ directory (run ls /boot/ from the command prompt). Inside this directory, the kernel file will look something like this:

‘vmlinuz-A.B.C-D’

… where A.B is 3.2, C is your version and D is a patch or fix. Let’s delve deeper into certain aspects depicted in Figure 3:

� Vmlinuz-3.2.0-29-generic: In vmlinuz, ‘z’ indicates the


‘compressed’ Linux kernel. With the development of virtual memory, the prefix vm was used to indicate that the kernel supports virtual memory.

� Initrd.img-3.2.0-29-generic: An initial ‘ramdisk’ for your kernel.
� Config-3.2.0-29-generic: The ‘config’ file is used to configure the kernel. We can configure, define options and determine which modules to load into the kernel image while compiling.
� System.map-3.2.0-29-generic: This is used for memory management before the kernel loads.

Kernel modules
The interesting thing about kernel modules is that they can be loaded or unloaded at runtime. These modules typically add functionality to the kernel—file systems, devices and system calls. They are located under /lib/modules with the extension .ko.
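As a quick illustration (not part of the GIT workflow itself), a minimal module looks like the hypothetical hello.c below; it can be built against the kernel headers, loaded with insmod and removed with rmmod:

/* hello.c – a minimal loadable kernel module (illustrative sketch) */
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	pr_info("hello: module loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");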

Setting up a development environment
Let’s set up the host machine with Ubuntu 14.04. Building the Linux kernel requires a few tools like GIT, make, gcc, ctags and ncurses-dev. Run the following command:

sudo apt-get install git-core gcc make libncurses5-dev exuberant-ctags

Once GIT is installed on the local machine (I am using Ubuntu), open a command prompt and issue the following commands to set the name and email that will be recorded with your commits:

git config --global user.name "Vinay Patkar"
git config --global user.email [email protected]

Let’s set up our own local repository for the Linux kernel.

Note:
1. Multiple Linux kernel repositories exist online. Here, we pull Linus Torvalds’ Linux-2.6 GIT code:
git clone http://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
2. In case you are behind a proxy server, set the proxy by running git config --global https.proxy https://domain\username:password@proxy:port.

Now you can see a directory named linux-2.6 in the current directory. Do a GIT pull to update your repository:

cd linux-2.6

git pull

Note: Alternatively, you can clone the latest stable build as shown below:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/

stable/linux-stable.git

cd linux-stable

Next, find the latest stable kernel tag by running the following code:

git tag -l | less

git checkout -b stable v3.9

Note: RC is the release candidate, and it is a functional but not stable build.

Once you have pulled the latest kernel code, create your own local branch using GIT. Make some changes to the code and, to commit them, run git commit -a.

Figure 2: Linux kernel architecture

Figure 3: Locating Ubuntu Linux kernel files

Figure 4: GIT pull

Figure 1: GIT



Setting up the kernel configuration
Many kernel drivers can be turned on or off, or be built as modules. The .config file in the kernel source directory determines which drivers are built. When you download the source tree, it doesn’t come with a .config file. You have several options for generating a .config file. The easiest is to duplicate your current config. There are multiple files that start with config; find the one that is associated with your kernel by running uname -a. Then run:

cp /boot/config-`uname -r`* .config
or
cp /boot/config-3.13.0-24-generic .config

make defconfig <---- for the default configuration
or
make nconfig <---- for an ncurses menu-based configuration, where we can enable or disable features

At this point, edit the Makefile as shown below:

VERSION = 3

PATCHLEVEL = 9

SUBLEVEL = 0

EXTRAVERSION = -rc9 <-- [edit this part]

NAME = Saber-toothed Squirrel

Now run:

make

This will take some time and if everything goes well, install the newly built kernel by running the following command:

sudo make modules_install

sudo make install

At this point, you should have your own version of the kernel, so reboot the machine, log in as the super user (root) and check uname -a. It should list your own version of the Linux kernel (something like ‘3.9.0-rc9’).

By: Vinay Patkar

The author works as a software development engineer at Dell India R&D Centre, Bengaluru, and has close to two years’ experience in automation and Windows Server OS. He is interested in virtualisation and cloud computing technologies.

References
[1] http://linux.yyz.us/git-howto.html
[2] http://kernelnewbies.org/KernelBuild
[3] https://www.kernel.org/doc/Documentation/
[4] http://kernelnewbies.org/LinuxVersions

Figure 5: GIT checkout

Figure 7: Modules_install and Install

Figure 6: Make




Managing Your IT Infrastructure Effectively with Zentyal

Zentyal (formerly eBox Platform) is a program for servers used by small and medium businesses (SMBs). It plays multiple roles—as a gateway, network infrastructure manager, unified threat manager, office server, unified communications server or a combination of all of the above. This is the third and last article in our series on Zentyal.

In previous articles in this series, we discussed various scenarios that included DHCP, DNS and setting up a captive portal. In this article, let’s discuss the HTTP proxy, traffic shaping and setting up of the ‘Users and Computers’ modules.

The HTTP proxy set-up
We will start with the set-up of the HTTP proxy module of Zentyal. This module will be used to filter out unwanted traffic from our network. The steps for the configuration are as follows:
1. Open the Zentyal dashboard by using the domain name set up in the previous article, or use the IP address.
2. The URL will be https://domain-name.
3. Enter the user ID and password.
4. From the dashboard, select HTTP Proxy under the Gateway section. This will show different options like General settings, Access rules, Filter profiles, Categorized Lists and Bandwidth throttling.
5. Select General settings to configure some basic parameters.
6. Under General settings, select Transparent Proxy. This option is used to manage proxy settings without making clients aware of the proxy server.
7. Check Ad Blocking, which will block all the advertisements from the HTTP traffic.
8. Cache size defines the storage area for cached HTTP traffic. Mention the size in MBs.
9. Click Change and then click Save changes.
10. To filter the unwanted sites from the network, block


them using Filter profiles. Click Filter profiles under HTTP proxy.

11. Click Add new.
12. Enter the name of the profile. In our case, we used Spam. Click Add and Save changes.
13. Click the button under Configuration.
14. To block all spam sites, let’s use the Threshold option. The various options under Threshold decide how the listed sites are blocked. Let’s select Very strict under Threshold and click Change. Then click Save changes to save the changes permanently.

15. Select Use antivirus to block incoming files that may contain viruses. Click the Change and Save changes buttons.

16. To add a site to be blocked by proxy, click Domain and URLs and under Domain and URL rules, click the Add new button.

17. You will then be asked for the domain name. Enter the domain name of the site which is to be blocked. Decision option will instruct the proxy to allow or deny the specified site. Then click Add and Save changes.

18. To activate the Spam profile, click Access rules under HTTP proxy.

19. Click Add new. Define the time period and the days when the profile is to be applied.

20. Select Any from Source dropdown menu and then select Apply filter profile from Decision dropdown menu. You will see a Spam profile.

21. Click Add and Save changes.

With all the above steps, you will be able to either block or allow sites, depending on what you want your clients to have access to. All the other settings can be experimented with, as per your requirements.

Bandwidth throttling
This setting under HTTP proxy is used to add delay pools, so that a big file being downloaded by one user does not hamper the download speed of the other users.

To do this, follow the steps mentioned below:
1. First create the network object on which you wish to apply the rule. Click Network and select Objects under Network options.

Table 1

Service (based on the firewall) | Source | Destination | Priority | Guaranteed rate (KBps) | Limited rate (KBps)
Yes | Any | Any | 2 | 512 | 0
Yes | Any | Any | 1 | 512 | 0
Yes | Any | Any | 3 | 1024 | 2048
Yes | Any | Any | 3 | 1024 | 2048
Yes | Any | Any | 7 | 0 | 10

2. Click Add new to add the network object.
3. Enter the name of the object, like LAN. Click Add, and then Save changes.
4. After you have added the network object, you have to configure members under that object. Click the icon under Members.
5. Click Add new to add members.
6. Enter the names of the members. We will use LAN users.
7. Under IP address, select the IP address range.
8. Enter your DHCP address range, since we would like to apply it to all the users in the network.
9. Click Add and then Save changes.
10. Till now, we have added all the users of the network on which we wish to apply the bandwidth throttling rule. Now we will apply the rule. To do this, click HTTP Proxy and select Bandwidth throttling.

11. This setting will be used to set the total amount of bandwidth that a single client can use. Click Enable per client limit.

12. Enter the Maximum unlimited size per client, to be set as a limit for a user under the network object. Enter ‘50 MB’. A client can now download a 50 MB file with maximum speed, but if the client tries to download a file of a greater size than the specified limit, the throttling rule will limit the speed to the maximum download rate per client. This speed option is set in the next step.

13. Enter the maximum download rate per client (for our example, enter 20). This means that if the download reaches the threshold, the speed will be decreased to 20 KBps.

14. Click Add and Save changes.

Traffic shaping set-up
With bandwidth throttling, we have set the upper limit for downloads, but to effectively manage our bandwidth we have to use the Traffic shaping module. Follow the steps shown below:

1. Click on Traffic shaping under the Gateway section.
2. Click on Rules. This will display two sections: rules for internal interfaces and rules for external interfaces.
3. Follow the example rules given in Table 1—these can be used to shape the bandwidth on eth1.



The rules mentioned in Table 1 set the priority of certain protocols over others, for guaranteed speed.
4. The rules given in Table 2 will manage the upload speed for the protocols on eth0.
5. After adding all the rules, click on Save changes.
6. With these steps, you have set the priorities of the protocols and applications. One last thing to be done here is to set the upload and download rates of the server. To do this, click Interface rates under Traffic Shaping.
7. Click Action. Change the upload and download speeds of the server to those supplied by your service provider. Click Change and then Save changes.

Setting up Users and Computers
Setting up of groups and users can be done as follows.
Group set-up: For this, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Select Groups from the LDAP tree. Click on the plus sign to add groups.

Table 2

Service (based on the firewall) | Source | Destination | Priority | Guaranteed rate (KBps) | Limited rate (KBps)
Yes | Any | Any | 7 | 0 | 10
No (Prioritise small packets) | – | – | 0 | 60 | 200

By: Gaurav Parashar

The author is a FOSS enthusiast, and loves to work with open source technologies like Moodle and Ubuntu. He works as an assistant dean (for IT students) at Inmantec Institutions, Ghaziabad, UP. He can be reached at [email protected]

References
[1] http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy
[2] http://en.wikipedia.org/wiki/Bandwidth_throttling
[3] http://doc.zentyal.org/en/qos.html

Users’ set-up: To set up users for the domain system and captive portal, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Here you will see the LDAP tree. Select Users and click on the plus sign.

With all the information entered, users can log in to the system through the captive portal.



� Windows as the host OS
• VMware Workstation (any guest OS)
• VirtualBox (any guest OS)
• Hyper-V (any guest OS)
� Linux as the host OS
• VMware Workstation
• Microsoft Virtual PC
• VMLite Workstation
• VirtualBox
• Xen

A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor is running one or more VMs is defined as a host machine. Each VM is called a guest machine. The hypervisor presents the guest OSs with a virtual operating platform, and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualised hardware resources.

Hypervisors of Type 1 (bare metal installation) and Type 2 (hosted installation)
When implementing and deploying a cloud service, Type 1 hypervisors are used. These are associated with the concept of bare metal installation, which means no host operating system is needed to install the hypervisor. When using this technology, there is no risk of corrupting the host OS. These hypervisors are installed directly on the hardware without the need for any other OS, and multiple VMs are created on them.

A Type 1 hypervisor is a type of client hypervisor that interacts directly with the hardware that is being virtualised. It is completely independent of the operating system, unlike a Type 2 hypervisor, and boots before the OS. Currently, Type 1 hypervisors are being used by all the major players in the desktop virtualisation space, including but not limited to VMware, Microsoft and Citrix.

The classical virtualisation software or Type 2 hypervisor is always installed on a host OS. If the host OS gets corrupted or crashes for any reason, the virtualisation software or Type 2 hypervisor will also crash and, obviously, all VMs and other resources will be lost. That’s why the hypervisor technology or bare metal installation is very popular in the cloud computing world.

Type 2 (hosted) hypervisors execute within a conventional OS environment. With the hypervisor layer as a distinct second software level, guest OSs run at the third level above the hardware. A Type 2 hypervisor is a type of client hypervisor that sits on top of an OS. Unlike a Type 1 hypervisor, a Type 2 hypervisor relies heavily on the operating system. It cannot boot until the OS is already up and running, and if for any reason the OS crashes, all end users are affected. This is a big drawback of Type 2 hypervisors, as they are only as secure as the OS on which they rely. Also, since Type 2 hypervisors depend on an OS, they are not in full control of the end user’s machine.

Hypervisor Type 1 products
• VMware ESXi
• Citrix Xen
• KVM (Kernel Virtual Machine)
• Hyper-V

Hypervisor Type 2 products
• VMware Workstation
• VirtualBox

Table 1: Hypervisors and their cloud service providers
Hypervisor | Cloud service provider
Xen | Amazon EC2, IBM SoftLayer, Fujitsu Global Cloud Platform, Linode, OrionVM
ESXi | VMware Cloud
KVM | Red Hat, HP, Dell, Rackspace
Hyper-V | Microsoft Azure

Data centres and uptime tier levels
Just as a virtual machine is mandatory for cloud computing, the data centre is also an essential part of the technology. All the cloud computing infrastructure is located in remote data centres where resources like computer systems and associated components, such as telecommunications and storage systems, reside. Data centres typically include redundant or backup power supplies, redundant data communications connections, environmental controls, air conditioning, fire suppression systems as well as security devices.

The tier level is the rating or evaluation aspect of a data centre. Large data centres are used for industrial scale operations and can use as much electricity as a small town. The standards comprise a four-tiered scale, with Tier 4 being the most robust and full-featured (Table 2).

Cloud simulations
Cloud service providers charge users depending upon the space or service provided.

In R&D, it is not always possible to have the actual cloud infrastructure for performing experiments. For any research scholar, academician or scientist, it is not feasible to hire cloud services every time and then execute their algorithms or implementations.

For the purpose of research, development and testing, open source libraries are available, which give the feel of cloud services. Nowadays, in the research market, cloud simulators are widely used by research scholars and practitioners, without the need to pay any amount to a cloud service provider.


Using cloud simulators, researchers can execute their algorithmic approaches on a software-based library and can get the results in different parameters including energy optimisation, security, integrity, confidentiality, bandwidth, power and many others.

Tasks performed by cloud simulators
The following tasks can be performed with the help of cloud simulators:

• Modelling and simulation of large scale cloud computing data centres

• Modelling and simulation of virtualised server hosts, with customisable policies for provisioning host resources to VMs

• Modelling and simulation of energy-aware computational resources

• Modelling and simulation of data centre network topologies and message-passing applications

• Modelling and simulation of federated clouds
• Dynamic insertion of simulation elements, stopping and resuming simulation
• User-defined policies for allocation of hosts to VMs, and policies for allotting host resources to VMs

Scope and features of cloud simulations
The scope and features of cloud simulations include:
• Data centres
• Load balancing
• Creation and execution of cloudlets
• Resource provisioning
• Scheduling of tasks
• Storage and cost factors
• Energy optimisation, and many others

Cloud simulation tools and plugins
Cloud simulation tools and plugins include:
• CloudSim
• CloudAnalyst
• GreenCloud
• iCanCloud
• MDCSim
• NetworkCloudSim
• VirtualCloud
• CloudMIG Xpress
• CloudAuction
• CloudReports
• RealCloudSim
• DynamicCloudSim
• WorkFlowSim

CloudSim
CloudSim is a well-known simulator for cloud parameters, developed in the CLOUDS Laboratory at the Computer Science and Software Engineering Department of the University of Melbourne.

The CloudSim library is used for the following operations:
� Large scale cloud computing at data centres
� Virtualised server hosts with customisable policies
� Support for modelling and simulation of large scale cloud computing data centres
� Support for modelling and simulation of virtualised server hosts, with customisable policies for provisioning host resources to VMs
� Support for modelling and simulation of energy-aware computational resources
� Support for modelling and simulation of data centre network topologies and message-passing applications
� Support for modelling and simulation of federated clouds
� Support for dynamic insertion of simulation elements, as well as stopping and resuming simulation
� Support for user-defined policies to allot hosts to VMs, and policies for allotting host resources to VMs
� User-defined policies for allocation of hosts to virtual machines

Table 2

Tier 1
Requirements: • Single non-redundant distribution path serving the IT equipment • Non-redundant capacity components • Basic site infrastructure with expected availability of 99.671 per cent
Possible unavailability in a given year: 1729.224 minutes (28.8 hours)

Tier 2
Requirements: • Meets or exceeds all Tier 1 requirements • Redundant site infrastructure capacity components with expected availability of 99.741 per cent
Possible unavailability in a given year: 1361.304 minutes (22.6 hours)

Tier 3
Requirements: • Meets or exceeds all Tier 1 and Tier 2 requirements • Multiple independent distribution paths serving the IT equipment • All IT equipment must be dual-powered and fully compatible with the topology of a site’s architecture • Concurrently maintainable site infrastructure with expected availability of 99.982 per cent
Possible unavailability in a given year: 94.608 minutes (1.5 hours)

Tier 4
Requirements: • Meets or exceeds all Tier 1, Tier 2 and Tier 3 requirements • All cooling equipment is independently dual-powered, including chillers, heaters, ventilation and air-conditioning (HVAC) systems • Fault-tolerant site infrastructure with electrical power storage and distribution facilities with expected availability of 99.995 per cent
Possible unavailability in a given year: 26.28 minutes (0.4 hours)


The major limitation of CloudSim is the lack of a graphical user interface (GUI). But despite this, CloudSim is still used in universities and the industry for the simulation of cloud-based algorithms.

Downloading, installing and integrating CloudSim
CloudSim is free and open source software available at http://www.cloudbus.org/CloudSim/. It is a code library based on Java. This library can be used directly by integrating it with the JDK to compile and execute the code.

For rapid applications development and testing, CloudSim is integrated with Java-based IDEs (Integrated Development Environment) including Eclipse or NetBeans.

Using Eclipse or NetBeans IDE, the CloudSim library can be accessed and the cloud algorithm implemented.

The directory structure of the CloudSim toolkit is given below:

CloudSim/   -- CloudSim root directory
docs/       -- API documentation
examples/   -- Examples
jars/       -- JAR archives
sources/    -- Source code
tests/      -- Unit tests

CloudSim needs to be unpacked for installation. To uninstall CloudSim, the whole CloudSim directory needs to be removed.

There is no need to compile the CloudSim source code. The JAR files provided with the CloudSim package are used to compile and run CloudSim applications:
� jars/CloudSim-<CloudSimVersion>.jar -- contains the CloudSim class files
� jars/CloudSim-<CloudSimVersion>-sources.jar -- contains the CloudSim source code files
� jars/CloudSim-examples-<CloudSimVersion>.jar -- contains the CloudSim examples class files
� jars/CloudSim-examples-<CloudSimVersion>-sources.jar -- contains the CloudSim examples source code files

Figure 1: Creating a new Java Project in Eclipse
Figure 2: Assigning a name to the Java Project
Figure 3: Build path for CloudSim library

Steps to integrate CloudSim with Eclipse
After installing the Eclipse IDE, let’s create a new project and integrate CloudSim into it.
1. Create a new project in Eclipse.
2. This can be done by File->New->Project->Java Project.


3. Give a name to your project.

4. Configure the build path for adding the CloudSim library.

5. Search and select the CloudSim JAR files.

In the integration and implementation of Java code and CloudSim, the Java-based methods and packages can be used. In this approach, the Java library is directly associated with CloudSim code.

After executing the code in Eclipse, the following output will be generated, which makes it evident that the integration of the dynamic key exchange is implemented with the CloudSim code:

Starting Cloud Simulation with Dynamic and Hybrid Secured Key

Initialising...

MD5 Hash Digest(in Hex. format)::

6e47ed33cde35ef1cc100a78d3da9c9f

Hybrid Approach (SHA+MD5) Hash Hex format:

b0a309c58489d6788262859da2e7da45b6ac20a052b6e606ed1759648e43e40b

Hybrid Approach Based (SHA+MD5) Security Key Transmitted =>

ygcxsbyybpr4¢ ª¢£?¡® £

Starting CloudSim version 3.0

CloudDatacentre-1 is starting...

CloudDatacentre-2 is starting...

Broker is starting...

Entities started.

0.0: Broker: Cloud Resource List received with 2 resource(s)

0.0: Broker: Trying to Create VM #0 in CloudDatacentre-1

0.0: Broker: Trying to Create VM #1 in CloudDatacentre-1

[VmScheduler.vmCreate] Allocation of VM #1 to Host #0 failed by

MIPS

0.1: Broker: VM #0 has been created in Datacentre #2, Host #0

0.1: Broker: Creation of VM #1 failed in Datacentre #2

0.1: Broker: Trying to Create VM #1 in CloudDatacentre-2

0.2: Broker: VM #1 has been created in Datacentre #3, Host #0

0.2: Broker: Sending cloudlet 0 to VM #0

0.2: Broker: Sending cloudlet 1 to VM #1

0.2: Broker: Sending cloudlet 2 to VM #0

160.2: Broker: Cloudlet 1 received

320.2: Broker: Cloudlet 0 received

320.2: Broker: Cloudlet 2 received

320.2: Broker: All Cloudlets executed. Finishing...

320.2: Broker: Destroying VM #0

320.2: Broker: Destroying VM #1

Broker is shutting down...

Simulation: No more future events

CloudInformationService: Notify all CloudSim entities for

shutting down.

CloudDatacentre-1 is shutting down...

CloudDatacentre-2 is shutting down...

Broker is shutting down...

Simulation completed.

Simulation completed.

============================= OUTPUT ================= ========

Cloudlet ID STATUS Data centre ID VM ID Time Start Time

Finish Time

==============================================================

1 SUCCESS 3 1 160 0.2 160.2

0 SUCCESS 2 0 320 0.2 320.2

2 SUCCESS 2 0 320 0.2 320.2

Cloud Simulation Finish

Simulation Scenario Finish with Successful Matching of the Keys

Simulation Scenario Execution Time in MillSeconds => 5767

Security Parameter => 30.959372773933122

2014-07-09 16:15:21.19

Figure 5: Select all JAR files of CloudSim for integration

Figure 4: Go to the path of CloudSim library

Figure 6: JAR files of CloudSim visible in the referenced libraries of Eclipse with Java Project


The CloudAnalyst cloud simulator
CloudAnalyst is another cloud simulator that is completely GUI-based and supports the evaluation of social network tools according to the geographic distribution of users and data centres.

Communities of users and data centres supporting the social networks are characterised and based on their location. Parameters such as user experience while using the social network application and the load on the data centre are obtained/logged.

CloudAnalyst is used to model and analyse real world problems through case studies of social networking applications deployed on the cloud.

The main features of CloudAnalyst are:
� User-friendly graphical user interface (GUI)
� Simulation with a high degree of configurability and flexibility
� Performs different types of experiments with repetitions
� Connectivity with Java for extensions

The GreenCloud cloud simulator
GreenCloud is also gaining popularity internationally as a cloud simulator for energy-aware cloud computing data centres, with the main focus on cloud communications. It provides features for detailed, fine-grained modelling of the energy consumed by data centre IT equipment like servers, communication switches and communication links. The GreenCloud simulator allows researchers to investigate, observe, interact with and measure a cloud’s performance based on multiple parameters. Most of the GreenCloud code is written in C++; TCL is also included in the GreenCloud library.

GreenCloud is an extension of the network simulator ns-2 that is widely used for creating and executing network scenarios. It provides the simulation environment that enables energy-aware cloud computing data centres. GreenCloud mainly focuses on the communications within a cloud. Here, all of the processes related to communication are simulated at the packet level.

Figure 7: Create a new Java program for integration with CloudSim

Figure 8: Writing the Java code with the import of CloudSim packages

Figure 9: Execution of the Java code integrated with CloudSim

By: Dr Gaurav Kumar

The author is associated with various academic and research institutes, delivering lectures and conducting technical workshops on the latest technologies and tools. Contact him at kumargaurav.in@gmail.com



Docker is an open source project, which packages applications and their dependencies in a virtual container that can run on any Linux server. Docker has immense possibilities as it facilitates the running of several OSs on the same server.

Technology is changing faster than styles in the fashion world, and there are many new entrants specific to the open source, cloud, virtualisation and DevOps technologies. Docker is one of them. The aim of this article is to give you a clear idea of Docker, its architecture and its functions, before getting started with it.

Docker is a new open source tool based on Linux container technology (LXC), designed to change how you think about workload/application deployments. It helps you to easily create lightweight, self-sufficient, portable application containers that can be shared, modified and easily deployed to different infrastructures such as cloud/compute servers or bare metal servers. The idea is to provide a comprehensive abstraction layer that allows developers to ‘containerise’ or ‘package’ any application and have it run on any infrastructure.

Docker is based on container virtualisation, which is not new, but there is no better tool than Docker to help manage kernel-level technologies such as LXC, cgroups and a copy-on-write filesystem. It helps us manage these complicated kernel-layer technologies through tools and APIs.

What is LXC (Linux Container)?
I will not delve too deeply into what LXC is and how it works, but will just describe some major components.

LXC is an OS-level virtualisation method for running multiple isolated Linux operating systems or containers on

a single host. LXC does this by using kernel-level namespaces, which help to isolate containers from the host. Now questions might arise about security: if I am logged in to my container as the root user, can I hack the base OS, and is it therefore not secure? This is not the case, because the user namespace separates the users of the container from those of the host, ensuring that the container’s root user does not have root privileges on the host OS. Likewise, there are the process namespace and the network namespace, which ensure that processes are displayed and managed only within the container and not on the host, and that the container has its own network devices and IP addresses.
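To make the namespace idea concrete, here is a minimal, illustrative C sketch (not taken from Docker or LXC) that uses the clone() system call to start a process inside new PID and UTS namespaces; it needs to be run as root:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* Runs inside the new namespaces: it sees itself as PID 1. */
static int child_fn(void *arg)
{
	printf("PID inside the container namespaces: %d\n", (int)getpid());
	return 0;
}

int main(void)
{
	char *stack = malloc(STACK_SIZE);
	pid_t pid;

	if (!stack)
		return 1;

	/* New PID and UTS namespaces; a full container would also add
	 * CLONE_NEWNS, CLONE_NEWNET, CLONE_NEWIPC, CLONE_NEWUSER, etc. */
	pid = clone(child_fn, stack + STACK_SIZE,
		    CLONE_NEWPID | CLONE_NEWUTS | SIGCHLD, NULL);
	if (pid == -1) {
		perror("clone");
		return 1;
	}
	waitpid(pid, NULL, 0);
	free(stack);
	return 0;
}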

Cgroups
Cgroups, also known as control groups, help to implement resource accounting and limiting. They help to limit the resources consumed by a container—memory, CPU and disk I/O—and also provide metrics on resource consumption by the various processes within the container.
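As an illustration of the kind of knob cgroups expose, the following sketch writes a memory limit into the cgroup v1 filesystem; it assumes the memory controller is mounted at /sys/fs/cgroup/memory and that a group named ‘demo’ has already been created (both are assumptions, not steps from this article):

#include <stdio.h>

int main(void)
{
	/* assumed path: cgroup v1 memory controller, group "demo" */
	FILE *f = fopen("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* cap the group's memory usage at 256 MB */
	fprintf(f, "%ld", 256L * 1024 * 1024);
	fclose(f);
	return 0;
}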

Copy-on-write filesystem
Docker leverages a copy-on-write filesystem (currently AUFS, but other filesystems are being investigated). This is what allows Docker to spawn containers quickly: to put it simply, instead of having to make full copies, it basically uses ‘pointers’ back to existing files.




Containerisation vs virtualisation
What is the rationale behind the container-based approach, and how is it different from virtualisation? Figure 2 speaks for itself.

Containers virtualise at the OS level, whereas both Type 1 and Type 2 hypervisor-based solutions virtualise at the hardware level. Both are forms of virtualisation; in the case of VMs, a hypervisor (whether Type 1 or Type 2) slices the hardware, while containers make available protected portions of the OS—they effectively virtualise the OS. If you run multiple containers on the same host, no container will come to know that it is sharing the same resources, because each container has its own abstraction. LXC takes the help of namespaces to provide the isolated regions known as containers. Each container runs in its own allocated namespace and does not have access outside of it. Technologies such as cgroups, union filesystems and container formats are also used for different purposes throughout containerisation.

Linux containers
Unlike virtual machines, with the help of LXC multiple containers can share a single source disk OS image. LXC is very lightweight, has a faster start-up and needs fewer resources.

Installation of Docker
Before we jump into the installation process, we should be aware of certain terms commonly used in Docker documentation.

Image: An image is a read-only layer used to build a container.

Container: This is a self-contained runtime environment that is built using one or more images. It also allows us to commit changes to a container and create an image.

Docker registry: These are the public or private servers, where anyone can upload their repositories so that they can be easily shared.

The detailed architecture is outside the scope of this article. Have a look at http://docker.io for detailed information.

Note: I am using CentOS, so the following instructions are applicable for CentOS 6.5.

Docker is part of Extra Packages for Enterprise Linux (EPEL), which is a community repository of non-standard packages for the RHEL distribution. First, we need to install the EPEL repository using the command shown below:

[root@localhost ~] # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

As per best practice, update the system first:

[root@localhost ~] # yum update -y

docker-io is the package that we need to install. As I am using CentOS, Yum is my package manager; so depending on your distribution ensure that the correct command is used, as shown below:

[root@localhost ~] # yum -y install docker-io

Once the above installation is done, start the Docker service with the help of the command below:

[root@localhost ~] # service docker start

To ensure that the Docker service starts at each reboot, use the following command:

[root@localhost ~] # chkconfig docker on

Figure 1: Linux Container

Figure 2: Virtualisation



References
[1] Docker: https://docs.docker.com/
[2] LXC: https://linuxcontainers.org/

By: Pradyumna Dash

The author is an independent consultant, and works as a cloud/ DevOps architect. An open source enthusiast, he loves to cook good food and brew ideas. He is also the co-founder of the site http:/www.sillycon.org/

To check the Docker version, use the following command:

[root@localhost ~] # docker version

How to create a LAMP stack with Docker
We are going to create a LAMP stack on a CentOS VM. However, you can work with different variants as well. First, let’s get the latest CentOS image. The command below will help us to do so:

[root@localhost ~] # docker pull centos:latest

Next, let’s make sure that we can see the image by running the following code:

[root@localhost ~] # docker images centos

REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE

centos latest 0c752394b855 13 days ago

124.1 MB

Running a simple bash shell to test the image also helps you to start a new container:

[root@localhost ~] # docker run -i -t centos /bin/bash

If everything is working properly, you'll get a simple bash prompt. Now, as this is just a base image, we need to install PHP, MySQL and Apache to complete the LAMP stack:

[root@localhost ~] # yum install php php-mysql mysql-server httpd

The container now has the LAMP stack. Type ‘exit’ to quit from the bash shell.

We are going to create this as a golden image, so that the next time we need another LAMP container, we don’t need to install it again.

Run the following command and please note the ‘CONTAINER ID’ of the image. In my case, the ID was ‘4de5614dd69c’:

[root@localhost ~] # docker ps -a

The ID shown in the listing is used to identify the container you are using, and you can use this ID to tell Docker to create an image.

Run the command below to make an image of the previously created LAMP container. The syntax is docker commit <CONTAINER ID> <name>. I have used the previous container ID, which we got in the earlier step:

[root@localhost ~] # docker commit 4de5614dd69c lamp-image

Run the following command to see your new image in the list. You will find the newly created image ‘lamp-image’ is shown in the output:

[root@localhost ~] # docker images

REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE

lamp-image latest b71507766b2d 2 minutes ago

339.7 MB

centos latest 0c752394b855 13 days ago

124.1 MB

Let’s log in to this image/container to check the PHP version:

[root@localhost ~] # docker run -i -t lamp-image /bin/bash

bash-4.1# php -v

PHP 5.3.3 (cli) (built: Dec 11 2013 03:29:57)

Zend Engine v2.3.0 Copyright (c) 1998-2010 Zend Technologies

Now, let us configure Apache. Log in to the container and create a file called index.php under Apache’s document root (/var/www/html/). If you don’t want to install vi or vim, use the echo command to redirect the following content into the index.php file:

<?php echo "Hello world"; ?>

Start the Apache process with the following command:

[root@localhost ~] # /etc/init.d/httpd start

Then test it with the help of the browser/curl/links utilities.
If you’re running Docker inside a VM, you’ll need to forward port 80 on the VM to another port on the VM’s host machine. The following command might help you to configure port forwarding; Docker has the feature to forward ports from containers to the host.

[root@localhost ~] # docker run -i -t -p :80 lamp-image /bin/bash

For detailed information on Docker and other technologies related to container virtualisation, check out the links given under ‘References’.


Wireshark: Essential for a Network Professional’s Toolbox

This article, the second in the series, presents further experiments with Wireshark, the open source packet analyser. In this part, Wireshark will be used to analyse packets captured from an Ethernet hub.

The first article in the Wireshark series, published in the July 2014 issue of OSFY, covered Wireshark architecture, its installation on Windows and Ubuntu, as well as various ways to capture traffic in a switched environment. Interpretation of DNS and ICMP Ping protocol captures was also covered. Let us now carry the baton forward and understand additional Wireshark features and protocol interpretation.

To start with, capture some traffic from a network connected to an Ethernet hub—which is the simplest way to capture complete network traffic. Interested readers may purchase an Ethernet hub from a second-hand computer dealer at a throwaway price and capture a few packets in their test environment. The aim of this is to acquire better hands-on practice of using Wireshark. So start the capture and, once you have sufficient packets, stop and view the packets before you continue reading.

An interesting observation about this capture is that, unlike a capture in a switched environment, which contains only broadcast traffic and the host’s own traffic, it contains packets from all source IP addresses connected in the network. Did you notice this?

The traffic thus contains:
� Broadcast packets
� Packets from all systems towards the Internet
� PC-to-PC communication packets
� Multicast packets

Now, at this point, imagine analysing traffic captured from hundreds of computers in a busy network—the sheer volume of captured packets will be baffling. Here, an important Wireshark


feature called ‘Display Filter' can be used very effectively.

Wireshark’s Display Filter
This helps to sort/view the network traffic using various parameters such as the traffic originating from a particular IP or MAC address, traffic with a particular source or destination port, ARP traffic and so on. It is impossible to imagine Wireshark without display filters!

Click on ‘Expressions’ or go to ‘Analyse – Display filters’ to find a list of pre-defined filters available with Wireshark. You can create custom filters depending upon the analysis requirements—the syntax is really simple.

As seen in Figure 2, the background colours of the display filter box offer ready help while creating proper filters. A green background indicates the correct command or syntax, while a red background indicates an incorrect or incomplete command. Use these background colours to quickly identify syntax and gain confidence in creating the desired display filters.

A few simple filters are listed below:
tcp: Displays TCP traffic only
arp: Displays ARP traffic
eth.addr == aa:bb:cc:dd:ee:ff: Displays traffic where the Ethernet MAC address is aa:bb:cc:dd:ee:ff
ip.src == 192.168.51.203: Displays traffic where the source IP address is 192.168.51.203
ip.dst == 4.2.2.1: Displays traffic where the destination IP address is 4.2.2.1
ip.addr == 192.168.51.1: Displays traffic where the source or the destination IP address is 192.168.51.1

Click on ‘Save’ to store the required filter for future use. By default, the top 10 custom filters created are available for ready use under the dropdown menu of the ‘Filter’ dialogue box.

With this background, let us look at two simple protocols —ARP and DHCP.

Address Resolution Protocol (ARP)
This is used to find the MAC address from the IP address. It works in two steps—the ARP request and the ARP reply. Here are the details.

Apply the appropriate display filter (ARP) and view only ARP traffic from the complete capture. Also, refer to Figure 3 - the ARP protocol.

The protocol consists of the ARP request and the ARP reply.
ARP request: This is used to find the MAC address of a system with a known IP address. For this, an ARP request is sent as a broadcast towards the MAC broadcast address:

Sender MAC address – 7c:05:07:ad:42:53

Sender IP address – 192.168.51.208

Target MAC address – 00:00:00:00:00:00

Target IP address – 192.168.51.1

Note: Target IP address indicates the IP address for which the MAC address is requested.

Wireshark displays the ARP request under the ‘Info’ box as: Who has 192.168.51.1? Tell 192.168.51.208

ARP reply: The ARP request broadcast is received by all systems connected to the network segment of the sender (below the router); note that this broadcast also reaches the router port connected to this segment.

The system with the destination IP address mentioned in the ARP request packet replies with its MAC address via an ARP reply. The important contents of the ARP reply are:

Figure 1: Traffic captured using HUB

Figure 2: Default Wireshark display filters

Figure 3: ARP protocol


Sender MAC Address – Belonging to the system which replies to the ARP request (updated by that system) – 00:21:97:88:28:21

Sender IP Address – Belonging to the system which replies to the ARP request – 192.168.51.1

Target MAC Address – Source MAC of the ARP request packet – 7c:05:07:ad:42:53

Target IP Address – Source IP address of the ARP request packet – 192.168.51.208

Wireshark displays the ARP reply under the ‘Info’ box as: 192.168.51.1 is at 00:21:97:88:28:21.

Thus, with the help of an ARP request and reply, system 192.168.51.208 has detected the MAC address belonging to 192.168.51.1.

Dynamic Host Configuration Protocol (DHCP)
This protocol saves a lot of time for network engineers by offering a unique, dynamically assigned IP address to a system that is connected to a network without an IP address. This also helps to avoid IP conflicts (the use of one IP address by multiple systems) to a certain extent. Computer users also benefit from the ability to connect to various networks without knowing the corresponding IP address range and which addresses are unused.

This DHCP protocol consists of four phases—DHCP discover, DHCP offer, DHCP request and DHCP ACK. Let us understand the protocol and interpret how these packets are seen in Wireshark.

Figure 4: DHCP protocol


By: Rajesh Deodhar

The author has been an IS auditor and network security consultant-trainer for the last two decades. He is a BE in Industrial Electronics, and holds CISA, CISSP, CCNA and DCL certifications. He can be contacted at [email protected]

1. The DHCP Server Identifier field, which specifies the IP address of the accepted server.

2. The host name of the client computer.Use Pane 2 of Wireshark to view these parameters under

‘Bootstrap Protocol’ – Options 54 and 12.The DHCP request packet also contains additional

client requests for the server to provide more configuration parameters such as the default gateway, DNS (Domain Name Server), address, etc.

DHCP acknowledgement: The server acknowledges a DHCP request by sending information on the lease duration and other configurations, as requested by the client during the DHCP request phase, thus completing the DHCP cycle.

For better understanding, capture a few packets, use Wireshark ‘Display Filters’ to filter and view ARP and DHCP, and read them using Wireshark panes.

Saving packetsPackets captured using Wireshark can be saved from the menu ‘File – Save as’ in different formats such as Wireshark, Novell LANalyzer and Sun Snoop, to name a few.

In addition to saving all captured packets in various file formats, the ‘File – Export Specified Packets’ option offers users the choice of saving ‘Display Filtered’ packets or a range of packets.

Please feel free to download the pcap files used for preparing this article from opensourceforu.com. I believe all OSFY readers will enjoy this interesting world of Wireshark, packet capturing and various protocols!

Troubleshooting tipsCapturing ARP traffic could reveal ARP poisoning (or ARP spoofing) in the network. This will be discussed in more detail at a later stage. Similarly, studying the capture of DHCP protocol may lead to the discovery of an unintentional or a rogue DHCP server within the network.

A word of cautionPackets captured using the test scenarios described in this series of articles are capable of revealing sensitive information such as login names and passwords. Some scenarios, such as using ARP spoofing may disrupt the network temporarily. Make sure to use these techniques only in a test environment. If at all you wish to use them in a live environment, do not forget to get the explicit written permission before doing so.

When a system configured with the ‘Obtain an IP address automatically’ setting is connected to a network, it uses DHCP to get an IP address from the DHCP server. Thus, this is a client–server protocol. To capture DHCP packets, users may start Wireshark on such a system, then start packet capture and, finally, connect the network cable.

Please refer to Figures 4 and 5, which give a diagram and a screenshot of the DHCP protocol, respectively.

Discovering DHCP servers: To discover DHCP server(s) in the network, the client sends a broadcast on 255.255.255.255 with the source IP as 0.0.0.0, using UDP port 68 (bootpc) as the source port and UDP 67 (bootps) as the destination. This message also contains the source MAC address as that of the client and ff:ff:ff:ff:ff:ff as the destination MAC.

A DHCP offer: The nearest DHCP server receives this 'discover' broadcast and replies with an offer containing the offered IP address, the subnet mask, the lease duration, the default gateway and the IP address of the DHCP server. The source MAC address is that of the DHCP server and the destination MAC address is that of the requesting client. Here, the UDP source and destination ports are reversed.

DHCP requests: Remember that there can be more than one DHCP server in a network. Thus, a client can receive multiple DHCP offers. The DHCP request packet is broadcast by the client with parameters similar to discovering a DHCP server, with two major differences:

Figure 5: Screenshot of DHCP protocol



Building the Android Platform: Compile the Kernel

Tired of stock ROMs? Build and flash your own version of Android on your smartphone. This new series of articles will see you through from compiling your kernel to flashing it on your phone.

Many of us are curious and eager to learn how to port or flash a new version of Android to our phones and tablets. This article is the first step towards creating your own custom Android system. Here, you will learn to set up the build environment for the Android kernel and build it on Linux.

Let us start by understanding what Android is. Is it an application framework or is it an operating system? It can be called a mobile operating system based on the Linux kernel, for the sake of simplicity, but it is much more than that. It consists of the operating system, middleware, and application software that originated from a group of companies led by Google, known as the Open Handset Alliance.

Android system architecture
Before we begin building an Android platform, let's understand how it works at a higher level. Figure 1 illustrates how Android works at the system level.

Figure 1: Android system architecture

We will not get into the finer details of the architecture in this article since the primary goal is to build the kernel. Here is a quick summary of what the architecture comprises:
• Application framework: Applications written in Java directly interact with this layer.
• Binder IPC: An Android-specific inter-process communication mechanism.
• Android system services: To access the underlying hardware, application framework APIs often communicate via system services.
• HAL: This acts as a glue between the Android system and the underlying device drivers.
• Linux kernel: At the bottom of the stack is a Linux kernel, with some architectural changes/additions including binder, ashmem, pmem, logger, wakelocks, different out-of-memory (OOM) handling, etc.

In this article, I describe how to compile the kernel for the Samsung Galaxy Star Duos (GT-S5282) with Android version 4.1.2. The build process was performed on an Intel Core i5 processor running 64-bit Ubuntu Linux 14.04 LTS (Trusty Tahr). However, the process should work with any Android kernel and device, with minor modifications. The handset details are shown in the screenshot (Figure 2), taken from the Settings -> About device menu of the phone.


Figure 2: Handset details for GT-S5282


System and software requirements
Before you download and build the Android kernel, ensure that your system meets the following requirements:
• A Linux system (Linux running on a virtual machine will also work, but is not recommended). The steps explained in this article are for Ubuntu 14.04 LTS, to be specific; other distributions should also work.
• Around 5 GB of free space to install the dependent software and build the kernel.
• A pre-built toolchain.
• Dependent software such as GNU Make, libncurses5-dev, etc.
• The Android kernel source (as mentioned earlier, this article describes the steps for the Samsung Galaxy Star kernel).
• Optionally, if you are planning to compile the whole Android platform (not just the kernel), a 64-bit system is required for Gingerbread (2.3.x) and newer versions.

It is assumed that the reader is familiar with Linux commands and the shell. Commands and file names are case sensitive. The Bash shell is used to execute the commands in this article.

Step 1: Getting the source code
The Android Open Source Project (AOSP) maintains the complete Android software stack, which includes everything except the Linux kernel. The Android Linux kernel is developed upstream and also by various handset manufacturers.

The kernel source can be obtained from:
1. Google Android kernel sources: Visit https://source.android.com/source/building-kernels.html for details. The kernel for a select set of devices is available here.
2. The handset manufacturer's or OEM's website: Here are a few links to the developer sites where you can find the kernel sources. Please understand that the links may change in the future.
• Samsung: http://opensource.samsung.com/
• HTC: https://www.htcdev.com/
• Sony: Most of the kernel is available on GitHub.
3. Independent developers: They provide non-official kernels.

This article will use the second method—we will get the official Android kernel for the Samsung Galaxy Star (GT-S5282). Go to the URL http://opensource.samsung.com/ and search for GT-S5282. Download the file GT-S5282_SEA_JB_Opensource.zip (184 MB).

Let’s assume that the file is downloaded in the ~/Downloads/kernel directory.

Step 2: Extract the kernel source code
Let us create a directory 'android' to store all relevant files in the user's home directory. The kernel and the Android NDK will be stored in the kernel and ndk directories, respectively.

$ mkdir ~/android

$ mkdir ~/android/kernel

$ mkdir ~/android/ndk

Now extract the archive:

$ cd ~/Downloads/kernel

$ unzip GT-S5282_SEA_JB_Opensource.zip

$ tar -C ~/android/kernel -zxf Kernel.tar.gz

The unzip command will extract the zip archive, which contains the following files:
• Kernel.tar.gz: The kernel to be compiled.
• Platform.tar.gz: Android platform files.
• README_Kernel.txt: Readme for kernel compilation.
• README_Platform.txt: Readme for Android platform compilation.

If the unzip command is not installed, you can extract the files using any other file extraction tool.

By running the tar command, we are extracting the kernel source to ~/android/kernel. While creating a sub-directory for extraction is recommended, let's avoid it here for the sake of simplicity.

Step 3: Install and set up the toolchain
There are several ways to install the toolchain. We will use the Android NDK to compile the kernel.

Please visit https://developer.android.com/tools/sdk/ndk/index.html to get details about NDK.

For 64-bit Linux, download Android NDK android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2 from http://dl.google.com/android/ndk/android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2

Ensure that the file is saved in the ~/android/ndk directory.

Note: To be specific, we need the GCC 4.4.3 toolchain to compile the downloaded kernel. Using the latest version of the Android NDK will lead to compilation errors.

Extract the NDK to ~/android/ndk:

$ cd ~/android/ndk

# For 64 bit version

$ tar -jxf android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2

Add the toolchain path to the PATH environment variable in .bashrc or the equivalent:


#Set the path for Android build env (64 bit)
export PATH=${HOME}/android/ndk/android-ndk-r9/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/bin:$PATH
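Before proceeding, it is worth confirming that the cross compiler is now reachable. Open a new shell (or source ~/.bashrc) and check the version; the exact version string printed may differ, but the command should not report 'command not found':

$ arm-linux-androideabi-gcc --version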

Step 4: Configure the Android kernel
Install the necessary dependencies, as follows:

$ sudo apt-get install libncurses5-dev build-essential

Set up the architecture and cross compiler, as follows:

$ export ARCH=arm

$ export CROSS_COMPILE=arm-linux-androideabi-

The kernel Makefile refers to the above variables to select the architecture and cross compile. The cross compiler command will be ${CROSS_COMPILE}gcc which is expanded to arm-linux-androideabi-gcc. The same applies for other tools like g++, as, objdump, gdb, etc.

Configure the kernel for the device:

$ cd ~/android/kernel

$ make mint-vlx-rev03_defconfig

The device-specific configuration files for ARM architecture are available in the arch/arm/configs directory.
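If you are not sure of the exact defconfig name for your device, simply list the configurations bundled with the kernel source; the file names will vary from one kernel tree to another:

$ ls arch/arm/configs/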

Executing the configuration command may throw a few warnings. You can ignore these warnings now. The command will create a .config file, which contains the kernel configuration for the device.

To view and edit the kernel configuration, run the following command:

$ make menuconfig

Next, let's assume you want to change LCD overlay support. Navigate to Drivers → Graphics → Support for framebuffer devices. The option to support LCD overlay should be displayed, as shown in Figure 3.

Skip the menuconfig step, or do not make any changes, if you are unsure.

Step 5: Build the kernel
Finally, we are ready to fire the build. Run the make command, as follows:

$ make zImage

If you want to speed up the build, specify the -j option to the make command. For example, if you have four processor cores, you can specify the -j4 option to make:

$ make -j4 zImage
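If you would rather not hard-code the job count, the nproc utility reports the number of available cores, so the same build can be started with:

$ make -j$(nproc) zImage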

The compilation process will take time to complete, based on the options available in the kernel configuration (.config) and the performance of the build system. On completion, the kernel image (zImage) will be generated in the arch/arm/boot/ directory of the kernel source.

Compile the modules:

$ make modules

This will trigger the build for kernel modules, and .ko files should be generated in the corresponding module directories. Run the find command to get a list of .ko files in the kernel directory:

$ find . -name "*.ko"
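If you want all the generated modules collected in one place (say, for copying them to the handset later), the kernel's standard modules_install target can stage them under any directory you choose; the staging path below is only an example:

$ mkdir -p ~/android/modules_staging
$ make modules_install INSTALL_MOD_PATH=~/android/modules_staging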

What next?
Now that you have set up the Android build environment, and compiled an Android kernel and the necessary modules, how do you flash it to the handset so that you can see the kernel working? This requires the handset to be rooted first, followed by flashing the kernel and related software. It turns out that there are many new concepts to understand before we get into this, so be sure to follow the next article on rooting and flashing your custom Android kernel.

By: Mubeen Jukaku

Mubeen is technology head at Emertxe Information Technologies (http://www.emertxe.com). His area of expertise is the architecture and design of Linux-based embedded systems. He has vast experience in kernel internals, device drivers and application porting, and is passionate about leveraging the power of open source for building innovative products and solutions. He can be reached at [email protected]

References
https://source.android.com/
https://developer.android.com/
http://xda-university.com

Figure 3: Kernel configuration – making changes


Lists: The Building Blocks of Maxima

Lists are the basic building blocks of Maxima. The fundamental reason is that Maxima is implemented in Lisp, the building blocks of which are also lists.

To begin with, let us walk through the ways of creating a list. The simplest method to get a list in Maxima is to just define it, using []. So, [x, 5, 3, 2*y] is a list consisting of four members. However, Maxima provides two powerful functions for automatically generating lists: makelist() and create_list().

makelist() can take two forms. makelist (e, x, x0, xn) creates and returns a list using the expression ‘e’, evaluated for ‘x’ using the values ranging from ‘x0’ to ‘xn’. makelist(e, x, L) creates and returns a list using the expression ‘e’, evaluated for ‘x’ using the members of the list L. Check out the example below for better clarity:

$ maxima -q

(%i1) makelist(2 * i, i, 1, 5);

(%o1) [2, 4, 6, 8, 10]

(%i2) makelist(concat(x, 2 * i - 1), i, 1, 5);

(%o2) [x1, x3, x5, x7, x9]

(%i3) makelist(concat(x, 2), x, [a, b, c, d]);

(%o3) [a2, b2, c2, d2]

(%i4) quit();

Note the interesting usage of concat() to just concatenate its arguments. Also note that makelist() is limited in the variation it can have, which, to be specific, is just one – 'i' in the first two examples and 'x' in the last one. If we want more, the create_list() function comes into play.

create_list(f, x1, L1, ..., xn, Ln) creates and returns a list with members of the form ‘f’, evaluated for the variables x1, ..., xn using the values from the corresponding lists L1, ..., Ln. Here is just a glimpse of its power:

$ maxima -q

(%i1) create_list(concat(x, y), x, [p, q], y, [1, 2]);

(%o1) [p1, p2, q1, q2]

(%i2) create_list(concat(x, y, z), x, [p, q], y, [1, 2], z, [a, b]);
(%o2) [p1a, p1b, p2a, p2b, q1a, q1b, q2a, q2b]
(%i3) create_list(concat(x, y, z), x, [p, q], y, [1, 2, 3], z, [a, b]);
(%o3) [p1a, p1b, p2a, p2b, p3a, p3b, q1a, q1b, q2a, q2b, q3a, q3b]

(%i4) quit();

Note that ‘all possible combinations’ are created using the values for the variables ‘x’, ‘y’ and ‘z’.

Once we have created the lists, Maxima provides a host of functions to play around with them. Let’s take a look at these.

This 20th article in our series on Mathematics in Open Source showcases the list manipulations in Maxima, the programming language with an ALGOL-like syntax but Lisp-like semantics.


Testing the lists
The following set of functions demonstrates the various checks on lists:
• atom(v) - returns 'true' if 'v' is an atomic element; 'false' otherwise
• listp(L) - returns 'true' if 'L' is a list; 'false' otherwise
• member(v, L) - returns 'true' if 'v' is a member of list L; 'false' otherwise
• some(p, L) - returns 'true' if predicate 'p' is true for at least one member of list L; 'false' otherwise
• every(p, L) - returns 'true' if predicate 'p' is true for all members of list L; 'false' otherwise

$ maxima -q
(%i1) atom(5);
(%o1) true
(%i2) atom([5]);
(%o2) false
(%i3) listp(x);
(%o3) false
(%i4) listp([x]);
(%o4) true
(%i5) listp([x, 5]);
(%o5) true
(%i6) member(x, [a, b, c]);
(%o6) false
(%i7) member(x, [a, x, c]);
(%o7) true
(%i8) some(primep, [1, 4, 9]);
(%o8) false
(%i9) some(primep, [1, 2, 4, 9]);
(%o9) true
(%i10) every(integerp, [1, 2, 4, 9]);
(%o10) true
(%i11) every(integerp, [1, 2, 4, x]);
(%o11) false
(%i12) quit();

List recreations
Next is a set of functions operating on list(s) to create and return new lists:
• cons(v, L) - returns a list with 'v', followed by members of L
• endcons(v, L) - returns a list with members of L followed by 'v'
• rest(L, n) - returns a list with members of L, except the first 'n' members (if 'n' is non-negative), otherwise except the last '-n' members. 'n' is optional, in which case it is taken as 1
• join(L1, L2) - returns a list with members of L1 and L2 interspersed
• delete(v, L, n) - returns a list like L but with the first 'n' occurrences of 'v' deleted from it. 'n' is optional, in which case all occurrences of 'v' are deleted
• append(L1, ..., Ln) - returns a list with members of L1, ..., Ln, one after the other
• unique(L) - returns a list obtained by removing the duplicate members in the list L
• reverse(L) - returns a list with members of the list L in reverse order

$ maxima -q
(%i1) L: makelist(i, i, 1, 10);
(%o1) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i2) cons(0, L);
(%o2) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i3) endcons(11, L);
(%o3) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
(%i4) rest(L);
(%o4) [2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i5) rest(L, 3);
(%o5) [4, 5, 6, 7, 8, 9, 10]
(%i6) rest(L, -3);
(%o6) [1, 2, 3, 4, 5, 6, 7]
(%i7) join(L, [a, b, c, d]);
(%o7) [1, a, 2, b, 3, c, 4, d]
(%i8) delete(6, L);
(%o8) [1, 2, 3, 4, 5, 7, 8, 9, 10]
(%i9) delete(4, delete(6, L));
(%o9) [1, 2, 3, 5, 7, 8, 9, 10]
(%i10) delete(4, delete(6, join(L, L)));
(%o10) [1, 1, 2, 2, 3, 3, 5, 5, 7, 7, 8, 8, 9, 9, 10, 10]
(%i11) L1: rest(L, 7);
(%o11) [8, 9, 10]
(%i12) L2: rest(rest(L, -3), 3);
(%o12) [4, 5, 6, 7]
(%i13) L3: rest(L, -7);
(%o13) [1, 2, 3]
(%i14) append(L1, L2, L3);
(%o14) [8, 9, 10, 4, 5, 6, 7, 1, 2, 3]
(%i15) reverse(L);
(%o15) [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
(%i16) join(reverse(L), L);
(%o16) [10, 1, 9, 2, 8, 3, 7, 4, 6, 5, 5, 6, 4, 7, 3, 8, 2, 9, 1, 10]
(%i17) unique(join(reverse(L), L));
(%o17) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i18) L;
(%o18) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
(%i19) quit();

Note that the list L is still not modified. For that matter, even L1, L2 and L3 are not modified. In fact, that is what is meant when we state that all these functions recreate new modified lists, rather than modify the existing ones.

List extractions
Here is a set of functions extracting the various members of a list. first(L), second(L), third(L), fourth(L), fifth(L), sixth(L), seventh(L), eighth(L), ninth(L) and tenth(L), respectively, return the first, second, ... member of the list L. last(L) returns the last


member of the list L.

$ maxima -q

(%i1) L: create_list(i * x, x, [a, b, c], i, [1, 2, 3, 4]);

(%o1) [a, 2 a, 3 a, 4 a, b, 2 b, 3 b, 4 b, c, 2 c, 3 c, 4 c]

(%i2) first(L);

(%o2) a

(%i3) seventh(L);

(%o3) 3 b

(%i4) last(L);

(%o4) 4 c

(%i5) third(L); last(L);

(%o5) 3 a

(%o6) 4 c

(%i7) L;

(%o7) [a, 2 a, 3 a, 4 a, b, 2 b, 3 b, 4 b, c, 2 c, 3 c, 4 c]

(%i8) quit();

Again, note that the list L is still not modified. However, we may need to modify the existing lists, and none of the above functions will do that. It could be achieved by assigning the return values of the various list recreation functions back to the original list. However, there are a few functions, which do modify the list right away.

List manipulations
The following are the two list manipulating functions provided by Maxima:
• push(v, L) - inserts 'v' at the beginning of the list L
• pop(L) - removes and returns the first element from list L

L must be a symbol bound to a list, not the list itself, in both the above functions, for them to modify it. Also, these functionalities are not available by default, so we need to load the basic Maxima file. Check out the demonstration below.

We may display L after doing these operations, or even check the length of L to verify the actual modification of L. In case we need to preserve a copy of the list, the function copylist() can be used.

$ maxima -q

(%i1) L: makelist(2 * x, x, 1, 10);

(%o1) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

(%i2) push(0, L); /* This doesn’t work */

(%o2) push(0, [2, 4, 6, 8, 10, 12, 14, 16, 18, 20])

(%i3) pop(L); /* Nor does this work */

(%o3) pop([2, 4, 6, 8, 10, 12, 14, 16, 18, 20])

(%i4) load(basic); /* Loading the basic Maxima file */

(%o4) /usr/share/maxima/5.24.0/share/macro/basic.mac

(%i5) push(0, L); /* Now, this works */

(%o5) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

(%i6) L;

(%o6) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

(%i7) pop(L); /* Even this works */

(%o7) 0

(%i8) L;

(%o8) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

(%i9) K: copylist(L);

(%o9) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

(%i10) length(L);

(%o10) 10

(%i11) pop(L);

(%o11) 2

(%i12) length(L);

(%o12) 9

(%i13) K;

(%o13) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

(%i14) L;

(%o14) [4, 6, 8, 10, 12, 14, 16, 18, 20]

(%i15) pop([1, 2, 3]); /* Actual list is not allowed */

arg must be a symbol [1, 2, 3]

#0: symbolcheck(x=[1,2,3])(basic.mac line 22)

#1: pop(l=[1,2,3])(basic.mac line 26)

-- an error. To debug this try: debugmode(true);

(%i16) quit();

Advanced list operations
And finally, here is a bonus of two sophisticated list operations:
• sublist_indices(L, p) - returns the list indices for the members of the list L for which the predicate 'p' is 'true'
• assoc(k, L, d) - L must have all its members in the form of x op y, where op is some binary operator. Then, assoc() searches for 'k' in the left operand of the members of L. If found, it returns the corresponding right operand; otherwise it returns 'd', or false if 'd' is missing.

Check out the demonstration below for both the above operations.

$ maxima -q

(%i1) sublist_indices([12, 23, 57, 37, 64, 67], primep);

(%o1) [2, 4, 6]

(%i2) sublist_indices([12, 23, 57, 37, 64, 67], evenp);

(%o2) [1, 5]

(%i3) sublist_indices([12, 23, 57, 37, 64, 67], oddp);

(%o3) [2, 3, 4, 6]

(%i4) sublist_indices([2 > 0, -2 > 0, 1 = 1, x = y], identity);

(%o4) [1, 3]

(%i5) assoc(2, [2^r, x+y, 2=4, 5/6]);

(%o5) r

(%i6) assoc(6, [2^r, x+y, 2=4, 5/6]);

(%o6) false

(%i7) assoc(6, [2^r, x+y, 2=4, 5/6], na);

(%o7) na

(%i8) quit();

By: Anil Kumar Pugalia
The author is a gold medallist from NIT, Warangal and IISc, Bengaluru. Mathematics and knowledge-sharing are two of his many passions. Learn more about him at http://sysplay.in. He can be reached at [email protected].


Replicant: A Truly Free Version of Android

Smartphones have evolved from being used just for communicating with others to offering a wide range of functions. The fusion between the Internet and smartphones has made these devices very powerful and useful to us. Android has been a grand success in the smartphone business. It's no exaggeration to say that more than 80 per cent of the smartphone market is now occupied by Android, which has become the preference of most mobile vendors today.

The reason is simple: Android is free and available to the public.

But there's a catch. Have you ever wondered how well Android respects 'openness'? And how much Android respects your freedom? If you haven't thought about it, please take a moment to do so. When you're done, you will realise that Android is not completely open to everyone.

That's why we're going to explore Replicant – a truly free version of Android.

Android and openness
Let's talk about openness first. The problem with a closed source program is that you cannot feel safe with it. There have been many incidents which suggest that people can easily be spied upon through closed source programs.

On the other hand, since open source code is open and available to everyone, one cannot plant a bug in an open source program, because the bug can easily be found. Apart from that, open source programs can be continually improved by people contributing to them—enhancing features and writing software patches. Also, there are many user communities that will help you if you are stuck with a problem.

When Android was first launched in 2007, Google also announced the 'Open Handset Alliance' (OHA) to work with other mobile vendors to create an open source mobile operating system, which would allow anyone to work on it. This seemed to be a good deal for the mobile vendors, because Apple's iPhone practically owned the smartphone market at that time. The mobile vendors needed another player, or 'game changer', in the smartphone market and they got Android.

When Google releases the Android source code to the public for free, it is called ‘stock Android’. This comprises only the very basic system. The mobile vendors take this stock Android and tailor it according to their device’s specifications—featuring unique visual aspects such as themes, graphics and so on.

OHA has many terms and conditions, so if you want to use Android in your devices, you have to play by Google's rules. The following are mandatory for each Android phone:
• Google setup wizard
• Google phone-top search
• Gmail apps
• Google Calendar
• Google Talk
• Google Hangouts
• YouTube
• Google Maps for mobile
• Google Street View
• Google Play store
• Google voice search

Replicant is a free and open source mobile operating system based on the Android platform. It aims at replacing proprietary Android apps and components with open source alternatives. It is security focused, as it blocks all known Android backdoors.


These specifications are in Google's 'Mobile Application Distribution Agreement' (MADA), which was leaked in February 2014.

There are some exceptions in the market, such as Amazon's Kindle Fire, which is based on the Android OS but doesn't feature the usual Google stuff and has Amazon's App Store instead of Google Play.

For a while, we were all convinced that Android was free and open to everyone. It may seem so on the surface but, under the hood, Android is not so open. We all know that, at its core, Android has a Linux kernel, which is released under the GNU Public License, but that's only a part of Android. Many other components are licensed under the Apache licence, which allows the source code of Android to be distributed freely and not necessarily to be released to the public. Some mobile vendors make sure that their devices run their very own tailored Android version by preventing users from installing any other custom ROMs. A forcibly installed custom ROM will nullify the warranty of the device. So, most users are forced to keep the Android version shipped with the device.

Another frustrating aspect for Android users concerns updates. In Android, updates are very complex, because there is no uniformity among the various devices running the Android OS. Even closed OSs support their updates—for example, Apple's iOS 5 supports the iPhone 4, 4s, iPad and iPad 2; and Microsoft allows its users to upgrade to Windows 7 from Windows XP without hassles. As you have probably noticed, only a handful of devices receive each new Android version. The rest of the users are forced to change their phones. Most users are alright with that because, today, the life expectancy of mobiles is a maximum of about two years. People who want to stay as updated as possible change their phones within a year. The reason behind this mess is that updates depend mostly on the hardware, the specs of which differ from vendor to vendor. Most vendors upgrade their hardware specs as soon as a new Android version hits the market. So the next time you try to install an app which doesn't work well on your device, just remember, "It's time to change your phone!"

Android and freedom
Online privacy is becoming a myth, since security threats pose a constant challenge. No matter how hard we work to make our systems secure, there's always some kind of threat arising daily. That's why systems administrators continually evaluate security and take the necessary steps to mitigate threats.

Not long ago, we came to know about PRISM – an NSA (USA) spy program that can monitor anyone, anywhere in the world, at any time. Thanks to Edward Snowden, who leaked this news, we now realise how vulnerable we are online. Although some may think that worrying about this borders on being paranoid, there's sufficient proof that all this is happening as you read this article. Many of us use smartphones for almost everything. We keep business contacts, personal details, and confidential data such as bank account numbers, passwords, etc, on them. It's not an exaggeration to state that our smartphones contain more confidential data than any other secure vault in this world. In today's world, the easiest way to track people's whereabouts is via their phones. So you should realise that you are holding a powerful device in your hands, and you are responsible for keeping your data safe.

People use smartphones to stay organised, set reminders or keep notes about ideas. Some of the apps use centralised servers to store the data. What users do not realise is that you lose control of your data when you trust a centralised server that is owned by a corporation you don't know. You are kept ignorant about how your data is being used and protected. If an attacker can compromise that centralised server, then your data could be at risk. To make things even more complicated, an attacker could erase all that precious data and you wouldn't even know about it.

Most of the apps in the Google Play store are closed source. Some apps are malicious in nature, working against the interests of the user. Some apps keep tabs on you or, worse, they can steal the most confidential data from your device without your knowledge. Some apps act as tools for promoting non-free services or software by carrying ads. Several studies reveal that these apps track their users' locations and store other background information about them.

You may think of this as paranoia, but the thing is that cyber criminals thrive on the ignorance of the public. It may be argued that most users do not have any illegal secrets on their phones, nor are they important people, so why should they worry about being monitored? Thinking along those lines resembles the man who ignores an empty gun at his doorstep. He may not use that gun, but is completely ignorant of the fact that someone else might use that gun and frame him for murder.

The following list features the devices supported by Replicant and their corresponding Replicant versions:
• HTC Dream/HTC Magic: Replicant 2.2
• Nexus One: Replicant 2.3
• Nexus S: Replicant 4.2
• Galaxy S: Replicant 4.2
• Galaxy S2: Replicant 4.2
• Galaxy Note: Replicant 4.2
• Galaxy Nexus: Replicant 4.2
• Galaxy Tab 2 7.0: Replicant 4.2
• Galaxy Tab 2 10.1: Replicant 4.2
• Galaxy S3: Replicant 4.2
• Galaxy Note 2: Replicant 4.2
• GTA04: Replicant 2.3
Separate installation instructions for these devices can be found on the Replicant website.

Replicant
Despite the facts that stack up against Android, it is almost impossible to ignore its benefits. For a while, Linux was considered a 'nerdy' thing, used only by developers, hackers and others in research. Typically, those in the 'normal' user community did not know much about Linux. After the arrival of


Android, everyone has the Linux kernel in their hands. Android acts as a gateway for Linux to reach all kinds of people. The FOSS community believes in Android, but since Android poses a lot of problems due to the closed nature of its source code, some people thought of creating a mobile operating system without relying on any closed or proprietary code or services. That’s how Replicant was born.

Most of Android’s non-free code deals with hardware such as the camera, GPS, RIL (Radio interface layer), etc. So, Replicant attempts to build a fully functional Android operating system that relies completely on free and open source code.

The project began in 2010, and was named after the fictional replicant androids in the movie 'Blade Runner'. Denis 'GNUtoo' Carikli and Paul Kocialkowski are the current lead developers of Replicant.

They began by writing code for the HTC Dream in order to make it a fully functional phone that did not rely on any non-free code. They made initial progress, such as getting the audio to work with fully free and open source code, and after that they succeeded in making and receiving calls. You can find a video of Replicant working on the HTC Dream on YouTube.

The earlier versions of Replicant were based on AOSP (the Android Open Source Project) but, in order to support more devices, the base was changed to CyanogenMod, another custom ROM which is free but still ships some proprietary drivers. Replicant version 4.2, which is based on CyanogenMod 10.1, was released on January 22, 2014.

On January 3, 2014, the Replicant team released its full-libre Replicant SDK. You’ve probably noticed that the Android SDK is no longer open source software. When you try to download it, you will be presented with lengthy ‘terms and conditions’, clearly stating that you must agree to that license’s terms or you are not allowed to use that SDK.

Replicant is all about freedom. As you can see, the Replicant team is labelling it the truly free version of Android. The team didn’t focus much on open source, although the source code for Replicant is open to everyone. When it comes to freedom, from the users’ perspective, the word simply means that they are given complete control over their device, even though they might not know what to do with that control. The Replicant team isn’t making any compromises when it comes to the user’s freedom. Although there may be some trade-offs concerning freedom, the biggest challenge for the Replicant team is to write hardware drivers and firmware that can support various devices. This is a difficult task since one Android device may differ from another. It’s not surprising that they mainly differ in their hardware capabilities. That is why some apps that work well on one device may not necessarily work well on another. This problem could be solved if device manufacturers decide that the drivers and firmware should be given to the public, but we all know that’s not going to happen. That’s why there are some devices running on Replicant that still don’t have 3D graphics, GPS, camera access, etc, but as

mentioned earlier, people who value their freedom above all else, find Replicant very appealing.

The Replicant team is gradually making progress in adding support for more devices. For some devices, the conversion from closed source to open source becomes cumbersome, which is why these devices are rejected by the Replicant team.

F-Droid
One of the reasons for the grand success of Android is the wide range of apps that is readily available on the Google Play store for anyone to download.

For Replicant, you cannot use Google Play but you can use an alternative—F-Droid, which has only free and open source software.

The problem with Google Play is that many apps on it are closed source. Since we cannot look at their source code, there's a real possibility of installing an app that could spy on you or, worse, steal your data. By installing apps from Google Play, users also inadvertently promote non-free software. Some apps even track their users' whereabouts.

F-Droid, on the other hand, makes sure all apps are built from their source code. When an application is submitted to F-Droid, it is in the form of source code. The F-Droid team builds it into a nice APK package from the source, so the user is assured that no other malicious code is added to that app since you can view the source code.

The F-Droid client app can be downloaded from the F-Droid website. This app is extremely handy for downloading and installing apps without hassle. You don’t need an account but can install various versions of apps provided there. You can choose the one that works best for you and also easily get automatic updates.

If you’re an Android user but want FOSS on your device, F-Droid is available to you. You have to allow your device to install apps from sources other than Google Play (which would be F-Droid). Using the single F-Droid client, you can easily browse through various sections of apps and easily remove the installed apps in your device or update your apps.

Using Replicant doesn't grant your device complete protection, but it can make your device less vulnerable to threats. It can offer you real control over your device and you can enjoy true freedom. If your device doesn't support Replicant, you can use CyanogenMod instead, which is officially suggested as an alternative to Replicant.

As Benjamin Franklin put it, “Those who give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety.” It’s up to you to choose between liberty and temporary safety.

By: Magimai Prakash
The author has completed a B.E. in computer science. As he is deeply interested in Linux, he spends most of his leisure time exploring open source.


Firefox May Change the Mobile Market!

TCL Communication's smartphone brand, Alcatel One Touch, launched the Alcatel One Touch Fire smartphone 'globally' last year. Fire was the first ever phone to run the Firefox OS, an open source operating system created by Mozilla. According to many, this OS is in some ways on par with Android, if not better. Sadly, Fire has failed to see the light of day in India, because our smartphone market has embraced Android on such a large scale that other OSs find it hard to make an impact. In a candid chat, Piyush A Garg, project manager, APAC BU India, spoke to Saurabh Singh from Open Source For You about how the Firefox OS could be the next big thing and why Alcatel One Touch has not yet given up on it.

It was not very long ago (July 25, 2011, to be precise) that Andreas Gal, director of research at Mozilla Corporation, announced the 'Boot to Gecko' project (B2G) to build a complete, standalone operating system for the open Web, which could provide a community-based alternative to commercially developed operating systems such as Apple's iOS and Microsoft's Windows Phone. Besides, the Linux-based operating system for smartphones and tablets (among others) also aimed to give Google's Android, Jolla's Sailfish OS as well as other community-based open source systems such as Ubuntu Touch, a run for their money (pun intended!). Although, on paper, the project boasts of tremendous potential, it has failed to garner the kind of response its developers had initially hoped for. The relatively few devices in a market that is flooded with the much-loved Android OS could be one possible reason. Companies like ZTE, Telefónica and GeeksPhone have taken the onus of launching Firefox OS-based devices; however, giants in the field have shied away from adopting it, until now.

Hong Kong's Alcatel One Touch is one of the few companies that has bet on Firefox by launching the Alcatel One Touch Fire smartphone globally, last year. The Firefox OS 1.0-based Fire was primarily intended for emerging markets with the aim of ridding the world of feature phones. Sadly, the Indian market was left out when the first Firefox OS-based smartphone was tested—could Android dominance be the reason? "Alcatel Fire (Alcatel 4012) was launched globally last year. We tried everything, but there's such a big hoo-ha about Android. Last year, it was a big thing. First, you have to create some space for the OS itself, and then create a buzz," revealed Piyush A Garg, project manager, APAC BU India.

According to Garg, there's still a basic lack of awareness regarding the Firefox OS in India. "Techies might be aware of what the Firefox OS is but the average end user may not. And ultimately, it is the end user who has to purchase the phone. We have to communicate the advantages of Mozilla Firefox to the end user, create awareness and only then launch a product based on it," he said.

Alcatel's plans for Firefox-based smartphones
So the bottom line is, India will not see the Alcatel One Touch Fire any time soon, or maybe not see it at all. "Sadly, yes. Fire is not coming to India at all. It's not going to come to India because Fire was an 8.89 cm (3.5 inch) product. Instead, we might be coming up with an 8.89-10.16 cm (3.5-4 inch) product. Initially, we were considering a 12.7-13.97 cm (5-5.5 inch) device. However, we are looking to come up with a low-end phone and such a device cannot come in the 12.7 cm (5 inch) segment. So, once the product is launched with an 8.89-10.16 cm (3.5-4 inch) screen with the Firefox OS, we may launch a whole series of Firefox OS-based devices," said Garg.

The Firefox OS ecosystem needs a push in India
With that said, it has taken a fairly long time for the company to realise that the Firefox OS could be a deal-breaker in an extensive market such as India. "Firefox OS may change the mobile game. However, it still needs to grow in India. Considering the fact


that Android has such a huge base in India, we are waiting for the right time to launch the Firefox-based smartphones here,” he said. But is the Firefox OS really a ‘deal-breaker’ for customers? “The Firefox OS can be at par with Android. The major advantages of Mozilla Firefox are primarily the memory factor and the space that it takes—the entire OS as well as the applications. It’s not basically an API kind of OS; it’s an installation directly coming from HTML. That’s a major advantage. Also, apps for the OS are built using HTML5, which means that, in theory, they run on the Web and on your phone or tablet. What made Android jump from Jelly Bean to KitKat (which requires low memory) is the fact that the end user is looking at a low memory OS. Mozilla Firefox is also easy to use. I won’t say ‘better’ or ‘any less’, but at par with Android,” said Garg, evidently confident of the platform.

To take things forward, vis-à-vis the platform, Alcatel One Touch is also planning to come up with an exclusive app store, with its own set of apps. "We have already planned our 'play store', and tied up with a number of developers to build our own apps. I cannot comment on the timeline of the app store but it's in the pipeline. We currently have as many as five R&D centres in China. We are not yet in India, although we are looking to engage developers here as well. We're already in the discussion phase on that front," said Garg. So, what's the company's strategy to engage developers in particular? "We invite developers to come up and give in their ideas. Then either we accept them, which means we buy the idea, or we work out some kind of association with which developers get revenue out of the collaboration. In China, more than 100,000 developers are engaged in building apps for Alcatel. India is on our to-do list for building a community of app developers. It's currently at an 'amateur stage'; however, we expect things to happen eventually," he said.

Although there’s no definite time period for the launch of Alcatel’s One Touch Firefox OS-based smartphone in India (Garg is confident it will be here by the end of 2014, followed by a whole series, depending upon how it’s received), one thing that is certain is that the device will be very affordable. Cutting costs while developing such low-end devices is certainly a challenge for companies, since customers do tend to choose ‘value for money’ when making their purchases. “We are not allowed to do any ‘trimming’ with respect to the hardware quality—since we are FCC-compliant, we cannot compromise on that,” said Garg.

So what do companies like Alcatel One Touch actually do to cut manufacturing costs? “We look at larger quantities that we can sell at a low cost, using competitive chipsets that are offered at a low price. On the hardware side, we may not give lamination in a low-cost phone, or we may not offer Corning glass or an IPS, and instead give a TFT, for instance,” Garg added.

OSFY Magazine attractions during 2014-15

Month          | Theme                               | Featured List                      | Buyers' Guide
March 2014     | Network monitoring                  | Security                           | -
April 2014     | Android Special                     | Anti Virus                         | Wifi Hotspot Devices
May 2014       | Backup and Data Storage             | Certification                      | External Storage
June 2014      | Open Source on Windows              | Mobile Apps                        | UTMs for SMEs
July 2014      | Firewall and Network Security       | Web Hosting Solutions Providers    | MFD Printers for SMEs
August 2014    | Kernel Development                  | Big Data Solution Providers        | SSDs for Servers
September 2014 | Open Source for Start-ups           | Cloud                              | Android Devices
October 2014   | Mobile App Development              | Training on Programming Languages  | Projectors
November 2014  | Cloud Special                       | Virtualisation Solutions Providers | Network Switches and Routers
December 2014  | Web Development                     | Leading Ecommerce Sites            | AV Conferencing
January 2015   | Programming Languages               | IT Consultancy Service Providers   | Laser Printers for SMEs
February 2015  | Top 10 of Everything on Open Source | Storage Solutions Providers        | Wireless Routers


HP’s latest mantra is the ‘new style of IT’. Conventional servers and data storage systems do not work for the company and its style of IT any longer. This is about the evolution of converged systems that have taken over the traditional forms of IT. The company is taking its mantra forward in every possible way.

HP has recently launched the HP Apollo family of high-performance computing (HPC) systems. The company claims that HP Apollo is capable of delivering up to four times the performance of standard rack servers while using less space and energy. The new offerings reset data centre expectations by combining a modular design with improved power distribution and cooling techniques. Apart from this, the company claims that HP Apollo has a higher density at a lower total cost of ownership. The air-cooled HP Apollo 6000 System maximises performance efficiency and makes HPC capabilities accessible to a wide range of enterprise customers, while the HP Apollo 8000 System is a supercomputer that combines high levels of processing power with a water-cooling design for ultra-low energy usage.

These servers add to the fast pace of changes going on in the IT space today. Vikram K from HP shares his deep insight into how IT is changing. Read on...

Q. Since you have just launched your latest servers here, what is your take on the Indian server market?

From a server standpoint, we are very excited, because virtually every month and a half, we’ve been offering a new enhancement or releasing a new product, which is different from the previous one. So the question is - how are these different? Well, we have basically gone back and looked at things through the eyes of the customer to understand what they expect from IT. They want to get away from conventional IT and move to an improvised level of IT. So we see three broad areas: admin controlled IT; user controlled IT, which is more like the cloud and is workload specific; and then there is application-specific ‘compute and serve’ IT. These are the three distinct combinations. Within these three areas, we have had product launches, one after the other. The first one, of course, is an area where we dominate. So, we decided to extend the lead and that is how the innovations continue to happen.

Q. What do you mean by 'new style of IT'?
It is the time for converged systems, which are opening up an altogether new dimension of IT. With converged systems, you get three different systems comprising the compute part, and the storage and the networking parts, to work together. A variety of IT heads are opting for this primarily because they want to either centralise IT, consolidate or improve the overall efficiency and performance. When they do that, they need to have better converged systems management. So we have combined our view of converged systems and made them workload specific. These days we have workload specific systems. For example, with something like a column-oriented database like Vertica, we have a converged system for virtualisation. Some time back, servers were a sprawl, but these days, virtual machines are a big sprawl.

Q. Converged systems have been around for about 18 months now. Can you throw some light on customers' experiences with these systems?
Yes, converged systems have been around for a while now and we have incrementally improved on their management. What we have today as a CSM for virtualisation or a CSM for HANA wasn't there a year back. The journey has been good and plenty of enterprises have expressed interest in such evolved IT. With respect to the adoption rate, the IT/ITES segment has been the first large adopter of converged systems, primarily because it has a huge issue with just doing the systems integration of 'X' computers that compute, 'Y' storage, while somebody else takes care of the networks. Now, it is the time for systems that come integrated with all three elements, and the best part is that they are very workload specific.

We see a lot of converged systems being adopted in the area of manufacturing also. People who had deployed SAP earlier have some issues. One of them is that it is multi-tier, i.e., it has multiple application servers and multiple instances in the database. So when they want to run analytics, it gets extremely slow because a lot of tools are used to extract information. We came up with a solution, which customers across the manufacturing and IT/ITES segments are now discovering. That is why we see a very good adoption of converged systems across segments.

Q. We hear a lot about software defined data centres (SDCs). Many players like VMware are investing a lot in this domain. How do you think SDCs are evolving in India?
The software-defined data centre really does have the potential to transform the entire IT paradigm and the infrastructure and application landscape. We have recently launched new products and services in the networking, high-performance computing, storage and converged infrastructure areas. They will allow enterprises to build software-defined data centres and hybrid cloud infrastructures. Big data, mobility, security and cloud computing are forcing organisations to rethink their approach to technology, causing them to invest heavily in IT infrastructure. So, when we are talking about software defined data centres, we are talking about a scenario in which it can be a heterogeneous



setup of hypervisors, infrastructure, et al, which will help you migrate from one to another, seamlessly.

Q. So, software defined data centres could replace traditional data centres in the future? Therefore, can we consider them a part of new-age IT?
Well, I don't believe that is so. We have been living with OLTP for about 30-35 years. As the cloud, big data and mobility pick up even more, and are used in the context of analytics, you will still have two contexts residing together, which is OLTP and OLAP. Then you would have more converged systems and will talk about converged system management. That is exactly our version of how we want to define software defined data centres.

Q. We talk a lot about integrated and converged systems. It sounds like a great idea as it would involve all the solutions coming in from one vendor. But does that not lead to some kind of vendor lock-in?
No, it doesn't, primarily because these are workload specific. So, one would not implement a converged system just for the sake of it. As I mentioned, it has to be workload specific. So, if you want to virtualise, then you would do one type of converged system or integrated system. If you want to do HANA, that is an entirely different converged system. What helps the customers is that it breaks down the cycle of project deployment and hence frees up a lot of resources that would otherwise be consumed for mere active deployment or transitioning from one context to another.

Q. So, are SMBs ready to jump onto the integrated systems bandwagon?

Yes, there are quite a few SMBs in India that are very positive about integrated systems. Customers, irrespective of the segment that they belong to, look at it from the angle of how the business functions, and what kind of specificity they want to get to. I wouldn’t be particularly concerned about the segment, but I would look at it from the context of what workload specificity a customer wants.

Q. What are the issues that you have seen IT heads face while adopting converged IT systems?

Fortunately, we have not heard of many challenges that the IT heads have faced while adopting converged IT solutions. In fact, it has eased things for them, primarily because they have been told in advance about what they are getting into. They are no more dealing with three separate items. They are getting into one whole thing, which is getting deployed and what they used to take months to achieve, is done in two or three days. This is because we run the app maps prior to the actual sale and tell them what exactly will reach them, how it will run and what kind of performance it will deliver. The major challenges are related to the fact that they are on the verge of a transition (from the business perspective), and they see any transition as being slightly risky. Hence, they thoroughly check on the ROI and are generally very cautious.


Swap Space for Linux: How Much is Really Needed?

Linux divides its physical memory (RAM) into chunks called pages. Swapping is the process whereby pages get transferred to a preconfigured hard disk area. The quantum of swap space is determined during the Linux installation process. This article is all about swap space, and explains the term in detail so that newbies don't find it a problem choosing the right amount of it when installing Linux.

The virtual memory of any system is a combination of two things - the physical memory that can be accessed directly, i.e., RAM, and swap space. The latter holds the inactive pages that are not being accessed by any running application. Swap space is used when the RAM has insufficient space for active processes but still holds pages that are inactive at that point in time. These inactive pages are temporarily transferred to the swap space, which frees up space in the RAM for active processes. Hence, the swap space acts as temporary storage that is required if there is insufficient space in your RAM for active processes. As soon as those pages are needed again by a running application, they are transferred back to the RAM. The access time for swap space is, however, much greater than that of RAM. In short, swapping is required for two reasons:
• When more memory than is available in physical memory (RAM) is required by the system, the kernel swaps out less-used pages and gives the system enough memory to run the application smoothly.
• Certain pages are required by the application only at the time of initialisation and never again. Such pages are transferred to the swap space soon after the application has accessed them.

After understanding the basic concept of swap space, one should know what amount of space needs to be actually allotted to the swap space so that the performance of Linux actually improves. An earlier rule stated that the amount of swap space should be double the amount of physical memory (RAM) available, i.e., if we have 16 GB of RAM, then we ought to allot 32 GB to the swap space. But this is not very effective these days.

Actually, the amount of swap space depends on the kind of application you run and the kind of user you are. If you are a hacker, you need to follow the old rule. If you frequently use hibernation, then you would need more swap space, because during hibernation the kernel transfers all the pages from memory to the swap area.

So how can the swap space improve the performance of Linux? Sometimes, RAM is used as a disk cache rather than to store program memory. It is, therefore, better to swap out a program that is inactive at that moment and, instead, keep the often-used files in cache. Responsiveness is improved by swapping pages out when the system is idle, rather than when the memory is full.

Even though we know that swapping has many advantages, it does not necessarily improve the performance of Linux on your system, always. Swapping can even make your system slow if the right quantity of it is not allotted. There are certain basic concepts behind this. Compared to memory, disks are very slow. Memory can be accessed in nanoseconds, while disks are accessed by the processor in milliseconds; accessing the disk can be many times slower than accessing the physical memory. Hence, the more the swapping, the slower the system. We should know the amount of space that we need to allot for swapping. The


The following rules can effectively help to improve Linux's performance on your system.

For normal servers:
• Swap space should be equal to the RAM size if the RAM size is less than 2 GB.
• Swap space should be 2 GB if the RAM size is greater than 2 GB.

For heavy-duty servers with fast storage requirements:
• Swap space should be equal to the RAM size if the RAM size is less than 8 GB.
• Swap space should be 0.5 times the size of the RAM if the RAM size is greater than 8 GB.

If you have already installed Linux, you can check your swap space by using the following command in the Linux terminal:

cat /proc/swaps
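If you later find that the swap area you allotted at install time is too small, a swap file can be added without repartitioning. Here is a minimal sketch (the path /swapfile and the 2 GB size are only examples):

sudo fallocate -l 2G /swapfile      # create a 2 GB file (use dd if fallocate is unavailable)
sudo chmod 600 /swapfile            # restrict permissions on the file
sudo mkswap /swapfile               # format it as swap
sudo swapon /swapfile               # enable it immediately

To make the swap file permanent, add the line '/swapfile none swap sw 0 0' to /etc/fstab. You can then confirm that the new swap area is active by running cat /proc/swaps again or by using free -h.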

Swappiness and how to change it
Swappiness is a parameter that controls the kernel's tendency to move pages from physical memory to swap space. It takes a value between 0 and 100, and in Ubuntu it defaults to 60. To check the current swappiness value, use the following command:

cat /proc/sys/vm/swappiness

A temporary change (lost at reboot) to a swappiness value of 10, for example, can be made with the following command:

sudo sysctl vm.swappiness=10

For a permanent change, edit the configuration file as follows:

gksudo gedit /etc/sysctl.conf
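Inside the file, add or update the vm.swappiness entry; for example (a minimal sketch, using the same value of 10 as above):

vm.swappiness=10

Save the file and run sudo sysctl -p to apply the settings without rebooting.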

If the swappiness value is 0, the kernel avoids swapping as far as possible; if the value is 100, the kernel swaps very aggressively.

So, while Linux as an operating system has great powers, you should know how to use those powers effectively so that you can improve the performance of your system.

By: Roopak T J
The author is an open source contributor and enthusiast. He has contributed to a couple of open source organisations, including MediaWiki and LibreOffice. He is currently in his second year of B.Tech at Amrita University. You can contact him at [email protected]



TIPS & TRICKS

Booting an ISO directly from the hard drive using GRUB 2

We often find ourselves with an ISO image of Ubuntu on the hard disk that we want to try out without first burning it to a disc or USB drive. Here is a method to boot the ISO image directly, using GRUB 2.

Create a GRUB menu entry by editing the /etc/grub.d/40_custom file. Add the text given below just after the existing text in the file:

#gksu gedit /etc/grub.d/40_custom

Add the menu entry:

menuentry "Ubuntu 12.04.2 ISO" {
        set isofile="/home/<username>/Downloads/ubuntu-12.04.2-desktop-amd64.iso"   # path of the ISO file
        loopback loop (X,Y)$isofile
        linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=$isofile noprompt noeject
        initrd (loop)/casper/initrd.lz
}

The isofile variable is not strictly required, but it simplifies the creation of multiple Ubuntu ISO menu entries.

The loopback line must reflect the actual location of the ISO file; in the example, it is stored in the user's Downloads folder. X is the drive number, starting with 0, and Y is the partition number, starting with 1; for example, sda5 would be designated as (hd0,5) and sdb1 as (hd1,1). Do not use (X,Y) literally in the menu entry; substitute the actual value, such as (hd0,5), for your system's configuration.
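If you are unsure which drive and partition numbers to use, the following commands can help you map the ISO's location to a GRUB device name (a quick sketch; the device names shown in the comments are just examples):

df /home/<username>/Downloads        # shows the partition (e.g., /dev/sda5) that holds the ISO
lsblk -o NAME,MOUNTPOINT             # lists disks and partitions, so /dev/sda5 translates to (hd0,5)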

Save the file and update the GRUB 2 menu:

#sudo update-grub
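Before rebooting, you can confirm that the entry made it into the generated configuration (a quick check; the path below is the usual one on Ubuntu):

grep "Ubuntu 12.04.2 ISO" /boot/grub/grub.cfg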

Now reboot the system; the new entry will appear in the GRUB boot menu.

—Kiran P S,[email protected]

Playing around with arguments
While working at the command line or writing shell scripts, we often need to reuse arguments passed to an earlier command. Here is a simple tip to recall the arguments of the last command.

Use ‘!!:n’ to select the nth argument of the last command, and ‘!$’ for the last argument.

dev@home$ echo a b c d

a b c d

dev@home$ echo !$

echo d

d

dev@home$ echo a b c d

a b c d

dev@home$ echo !!:3

echo c

c
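Two related expansions that work the same way in Bash are ‘!^’ for the first argument and ‘!*’ for all arguments of the last command:

dev@home$ echo a b c d
a b c d
dev@home$ echo !^
echo a
a
dev@home$ echo !*
echo a b c d
a b c d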

—Shivam Kotwalia, [email protected]

Retrieving disk information from the command line

Want to know details of your hard disk even without physically touching it? Here are a few commands that will do the trick. I will use /dev/sda as my disk device, for which I want the details.

smartctl -i /dev/sda

smartctl is a command line utility designed to perform SMART (Self-Monitoring, Analysis and Reporting Technology) tasks such as printing the SMART self-test and error logs, enabling and disabling SMART automatic testing, and initiating device self-tests. When used with the -i switch, it prints identity information about the disk.

The output of the above command will show the model family, device model, serial number, firmware version, user capacity, etc, of the hard disk (sda).

You can also use the hdparm command:

hdparm -I /dev/sda

hdparm can give much more information than smartctl.
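A couple of related commands are also worth keeping handy (a quick sketch; smartctl may require the smartmontools package to be installed):

sudo smartctl -H /dev/sda            # prints an overall SMART health verdict (PASSED/FAILED)
lsblk -o NAME,SIZE,MODEL             # lists all block devices with their size and model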

—Munish Kumar,[email protected]

Writing an ISO image file to a CD-ROM from the command line

We usually download ISO images of popular Linux distros for installation or as live media, but end up using a GUI CD burning tool to create a bootable CD or DVD ROM. But, if you’re feeling a bit geeky, you could try doing so from the command line too:

# cdrecord -v speed=0 driveopts=burnfree -eject dev=1,0,0 <src_iso_file>

speed=0 instructs the program to write the disk at the lowest possible drive speed. But, if you are in a hurry, you can try speed=1 or speed=2. Keep in mind that these are relative speeds.

The -eject switch instructs the program to eject the disk after the operation is complete.

Now, the most important part is specifying the device ID. Make sure you specify the device ID of your CD ROM drive correctly, or you may end up writing to the wrong device. To find out the device ID of your CD ROM drive, just run the following command before the one above:

#cdrecord -scanbus

Your CD ROM’s device ID should look something like what’s shown below:

1,0,0

Also, note that you cannot create a bootable DVD disk using this command. But, do not be disheartened—there is another simpler command to burn a bootable DVD, which is:

# growisofs -dvd-compat -speed=0 -Z /dev/dvd=myfile.iso

Here, /dev/dvd is the device file that represents your DVD ROM. It is quite likely to be the same on your system as well.
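If you want to verify the burn, one rough way is to compare checksums of the ISO and the disc (a sketch; it assumes the ISO size is a multiple of 2048 bytes, which is true for standard ISO 9660 images):

md5sum myfile.iso
dd if=/dev/dvd bs=2048 count=$(( $(stat -c %s myfile.iso) / 2048 )) | md5sum

The two checksums should match if the disc was written correctly.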

Do not use growisofs to burn a CD ROM. The beauty of Linux is that a single command does a single operation and does it well.

—Pankaj Rane,[email protected]

Downloading/converting HTML pages to PDF
wkhtmltopdf is a software package that converts HTML pages to PDF. If it is not installed on your system, use the following command to do so:

$sudo apt-get install wkhtmltopdf

After installing, you can run the command using the following syntax:

$wkhtmltopdf <URL of the HTML page> <name of the PDF file>.pdf

For example, by using:

$wkhtmltopdf opensourceforu.com OSFY.pdf

…the page will be converted and saved as OSFY.pdf in the current working directory.
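wkhtmltopdf also accepts a number of options and local files as input; for example (flag names as supported by most builds, so check wkhtmltopdf --help on your system; the output file names are just examples):

$wkhtmltopdf --orientation Landscape --page-size A4 opensourceforu.com OSFY-landscape.pdf
$wkhtmltopdf ./report.html report.pdf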

You can read the documentation to know more about this.

—Manu Prasad,[email protected]

Going invisible on the terminal
Did you ever think that you could type commands that are invisible on the terminal but still execute, provided you type them correctly? This can easily be done by changing the terminal settings with the following command:

stty -echo

To restore the visibility of your commands, just type the following command:

stty echo

Note: Only the ‘minus’ sign has been removed.
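A common practical use of this is hiding a password while it is typed, for example inside a shell script (a small sketch; the variable name is arbitrary):

printf "Password: "
stty -echo          # turn off echoing so the password stays hidden
read password
stty echo           # restore echoing
printf "\n"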

—Sumit Agarwal,[email protected]

Share Your Linux Recipes!
The joy of using Linux is in finding ways to get around problems—take them head on, defeat them! We invite you to share your tips and tricks with us for publication in OSFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at www.linuxforu.com. The sender of each published tip will get a T-shirt.


The Mozilla Location Service: Addressing Privacy Concerns

Dubbed a research project, the Mozilla Location Service is a crowd-sourced mapping of wireless networks (Wi-Fi access points, cell phone towers, etc) around the world. This information is commonly used by mobile devices and computers to ascertain their location when GPS services are not available. The entry of Mozilla into this field is expected to be a game changer. So get to know more about Mozilla's MozStumbler mobile app, as well as Ichnaea.

The Mozilla mission statement expresses a desire to promote openness, innovation and opportunity on the Web, and Mozilla takes this pretty seriously. Firefox, Thunderbird, Firefox OS… the list of Mozilla's open source products keeps growing. Yet there are several areas in which tech giants like Google, Nokia and Apple are dominant, and the mobile ecosystem is one of them. Mozilla is now trying to break into this space. After Firefox OS, the foundation now offers a new service for mobile users.

There are several services that a user might not even be aware of while using a cell phone. The network-based location service is one of the most used services by cell phone owners to determine their location if the GPS service is not available. Several companies currently offer this service but there are major privacy concerns associated with it. It is no secret that advertising companies track a user’s location history and offer ads or services based on it.

Till now, there was no transparent option among these services, but Mozilla has come to the rescue to prevent the tech giants from sniffing out our locations. As stated on Mozilla's location service website, “The Mozilla Location Service is a research project to investigate crowd-sourced mapping of wireless networks (Wi-Fi access points, cell towers, etc) around the world. Mobile devices and desktop computers commonly use this information to figure out their location when GPS satellites are not accessible.”

In the same statement, Mozilla acknowledges the presence of and the challenges presented by the other services, saying, “There are few high-quality sources for this kind of geolocation data currently open to the public. The Mozilla Location Service aims to address this issue by providing an open service to provide location data.”

This service provides geolocation lookups based on publicly observable cell tower and Wi-Fi access point information. Mozilla has come out with an Android app to collect publicly observable cell towers and Wi-Fi data; it’s called MozStumbler.

This app scans cell towers and Wi-Fi access points and uploads the information to Mozilla servers. The latest stable version of the app, 0.20.5, is ready for download. MozStumbler lets you upload the scanned data over a Wi-Fi or cellular network, but you don't need to be online while scanning; you can upload the data afterwards.

Note:
1. This app is not available on the Google Play store, but you can download it from https://github.com/MozStumbler/releases/
2. The Firefox OS version of this app is on its way too. You can stay abreast of what's happening with the Firefox OS app at http://github.com/FxStumbler/

Figure 1: The MozStumbler app; Figure 2: MozStumbler options; Figure 3: MozStumbler settings


You can optionally give your username in the app to track your contributions. Mozilla has also created a leaderboard that lets users track and rank their contributions, and more detailed statistics are available on the website. No user-identifiable information is collected through this app.

Mozilla is not only collecting the data but also providing users with a publicly accessible API. It has code-named the API ‘Ichnaea’, which means ‘the tracker’. The API can be used to submit data, search the data or look up your location. As data collection is still in progress, the service is not recommended for commercial applications, but you can try it out on your own just for fun.

Note: Mozilla Ichnaea can be accessed at https://mozilla-ichnaea.readthedocs.org
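As a rough illustration of what a lookup request could look like with curl (the exact endpoint, request fields and response format must be checked against the Ichnaea documentation linked above; the URL, field names and MAC addresses below are illustrative assumptions only):

curl -H "Content-Type: application/json" \
     -d '{"wifi": [{"key": "01:23:45:67:89:ab"}, {"key": "01:23:45:67:89:cd"}]}' \
     "https://location.services.mozilla.com/v1/search?key=test"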

The MozStumbler app provides an option for geofencing, which means you can pause scanning within a one km radius of a chosen location. This addresses user concerns about the collection of behavioural data such as home and work locations and travelling habits.

In short, Mozilla is trying to provide a high-quality location service to the general public at no cost! Recently, Mozilla India held a competition, the ‘Mozilla Geolocation Pilot Project India’, which encouraged more and more users to scan their areas. To contribute to this project, you can fork the repository on GitHub or just install the app; you will be welcomed aboard.

By: Vinit Wankhede
The author is a fan of free and open source software. He is currently contributing to the translation of the MozStumbler app for Mozilla location services.
