
CERN COMPUTER NEWSLETTER
Volume XL, issue 5, November–December 2005

Contents

Editorial
● DES homes in on users’ needs
Announcements & news
● Computing featured in this month’s CERN COURIER
● AppleTalk routing is discontinued
● Department reorganization emphasizes physics support
● Account closure tightens security
● Registration gets an electronic fax
● CHEP06 scheduled for Mumbai, India, on 13–17 February 2006
● CRA application represents the future of account handling
Technical briefs
● Java Web hosting comes to CERN
Projects
● openlab II gets ready to build on the success of openlab I
Conference report
● CSC2005 is a record breaker
LCG news
● ARDA prototypes physics analysis
● Barcelona’s Port d’Informació Científica: a centre of excellence for scientific-data processing
Physics computing
● CERN explains its Linux strategy: SLC4 or SLC5?
● Cluster management helps ATLAS tests run smoothly
Information corner
● Keep your software updated and stay ahead of viruses
● Questions and answers from the Helpdesk
● IT services e-mails do not contain attachments
● Service Status Board gets dynamic
● News from the IT Bookshop
● Calendar

Editors Nicole Crémel and Hannelore Hämmerle, CERN IT Department, 1211 Geneva 23, Switzerland. E-mail: [email protected]. Fax: +41 (22) 7668500. Web: cerncourier.com (link CNL).

Advisory board Wolfgang von Rüden (head of IT Department), François Grey (IT Communication team leader), Christine Sutton (CERN Courier editor), Tim Smith (group leader, User and Document Services).

Produced for CERN by Institute of Physics Publishing, Dirac House, Temple Back, Bristol BS1 6BE, UK. Tel: +44 (0)117 929 7481. E-mail: [email protected]. Fax: +44 (0)117 920 0733. Web: iop.org.

Published by CERN IT Department

©2005 CERN

The contents of this newsletter do not necessarily represent the views of CERN management.

Editorial

DES homes in on users’ needs

Mats Moller talked to CNL about his role as DES group leader.

In 2004, the Database and Engineering Services (DES) group was created, gathering activities that had previously been dispersed over three groups. CNL spoke to DES group leader Mats Moller about the synergies this has achieved, and the new technologies and services that the group is deploying.

The DES group covers three main areas: Database Services, Engineering Services and Systems Services. What important trends do you see in these services?
I would say that the most important development has been for these different services to focus more intensely on user needs. It is easy to get tied up solving technical problems – software is never perfect – and lose sight of the end-user. By comparing their approaches to user support, the different services in the group can learn from each other.

As a result of this user focus, we are extending monitoring of services to try to detect problems before the end-users see them – and take action to rectify them. An example is monitoring file systems to detect when they begin to overflow, which can block applications.
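
As an illustration of the kind of check described here, a minimal Python sketch follows. The watched paths, the 90% threshold and the e-mail alert are assumptions made for the example, not the actual DES monitoring configuration.

```python
"""Minimal sketch of a file-system "overflow" check of the kind described
above. Paths, threshold and the e-mail alert are illustrative assumptions,
not the actual DES monitoring setup."""

import os
import smtplib
from email.message import EmailMessage

WATCHED = ["/var/spool", "/ora/data"]   # hypothetical file systems to watch
THRESHOLD = 0.90                        # warn when a file system is 90% full

def usage_fraction(path):
    """Return used/total space for the file system holding `path`."""
    st = os.statvfs(path)
    used = (st.f_blocks - st.f_bavail) * st.f_frsize
    return used / (st.f_blocks * st.f_frsize)

def alert(path, frac):
    """Warn the operators so they can act before applications block."""
    msg = EmailMessage()
    msg["Subject"] = f"File system {path} is {frac:.0%} full"
    msg["From"] = "fs-monitor@example.invalid"
    msg["To"] = "operators@example.invalid"
    msg.set_content(f"{path} has reached {frac:.0%} of its capacity.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    for fs in WATCHED:
        fraction = usage_fraction(fs)
        if fraction >= THRESHOLD:
            alert(fs, fraction)
```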

It’s important to realize that these services cannot stop outside a scheduled maintenance window. A failure can immediately affect different parts of CERN, like administration or the SM18 magnet test facility.

There is also a move to commodity computing going on in the group. Not to the same extent as for the physics computing, perhaps, but it is happening. Solaris boxes for the database services are being gradually replaced by Linux clusters. Solaris support is being reduced and SUNDEV was shut down this summer. Solaris may be phased out entirely, although this would be over several years. On the engineering side, we are switching to Windows, as on PCs it is now the most popular platform in industry for engineering software.

Another Grid-inspired development is the use of hardware load balancers. These scale well with cluster size, and make it easier to add nodes in a cluster, or perform upgrades and maintenance, without having to shut down a service. Commodity clusters can also be easily integrated with central alarm services, such as LEMON, to ensure faster and cheaper maintenance. These sorts of changes are possible through close collaboration with other groups in IT, such as the Communication Systems (CS) group for the switching behind the load balancer, and the Fabric Infrastructure and Operations (FIO) group for the system administration of Linux clusters.

It is often said that the problem with supporting general and administrative services is that the user only notices them when they fail. Could you give some quantitative feel to CNL readers of the scale of the activities that your group supports?
Take the Printer Services, for example. There are 1300 network printers, 1700 queues and 15 000 jobs each day, which makes for between 30 000 and 60 000 pages of print-outs. This corresponds to about 15 GB of spooled data every day.

Or consider the Concurrent Versions System (CVS) service where software developers store their code. This supports nearly 1200 developers, and is an absolutely critical service, since a major failure could mean the loss of millions of software-development man-hours. We have created a second CVS specifically for LCG, where there are 23 projects representing 400 developers, and work is ongoing to create a similar dedicated repository for EGEE.

Then there is the Engineering Data Management Service (EDMS). The DES group is collaborating with TS Department’s Control, Safety and Engineering Databases (CSE) group to deliver this service. The DES group provides the database and application server part. The EDMS-MTF (Manufacturing and Test Folder) keeps, among other things, test results and configurations for the LHC magnets. There are 616 000 documents and 385 000 items of equipment stored in EDMS, with 675 documents and 770 items of equipment added every day, on average. The CADENCE cluster, which stores the electronics designs for some 400 systems engineers, is another example of such a specialized yet vital service.

What changes and new services do you see the DES group supporting in the coming months and years?
A very important change is the update of all computer-aided-design information from the EUCLID system to the newer CATIA technology. There are some 160 000 designs in EUCLID today, so this represents a huge effort, primarily for the designers themselves, to ensure that the correct versions are transferred to the new system. Because the LHC is in a critical phase right now, we decided to start the migration to CATIA for only non-LHC designs, and continue with the LHC data after 2007.

Some new services we are supporting include Twiki, which allows groups of people to develop Web pages jointly and share information on these pages, using a form-based Web application. There are already 30 active Twiki projects and 400 users being supported. BOINC, the software platform that supports LHC@home, is another service that DES is supporting and further developing. Thanks to this service, there are more than 10 000 public PCs doing simulations of the LHC beam dynamics for the AB department. And other applications in physics are being investigated for use on this platform.

Overall, this is a challenging period for a service group like DES. We are changing the underlying architecture of our services quite radically, while maintaining critical services. Forward-looking activities, such as the CERN openlab collaboration with Oracle, which continues to develop well, are helping us to deal proactively with some of these changes. And of course, we must continue to listen to the evolving needs of our users – that is what good service is all about.

Announcements & news

Computing featured in this month’s CERN Courier

Computing News
● Synergia provides new 3D tool
Synergia 1.0 simulates particle behaviour in an accelerator.
● Tim Berners-Lee receives Quadriga award
The W3C director and World Wide Web creator is honoured.
● W3C specification guidelines assist authors
QA working group publishes its guide to clearer writing.
● FZJ’s new supercomputer offers extra power
The 5.6 Tflop JUBL is installed.

IT products and calendar

Feature article
● Internet growth requires new transmission protocol
Possible solutions to high-rate data-transfer problems.

Viewpoint
● The dark side of computing power
Google’s views on computing power.

These articles appear in the November 2005 issue of CERN Courier. See www.cerncourier.com for full-text articles.

AppleTalk routing is discontinued

AppleTalk is a legacy Mac OS protocol for discovering network file, print and other services. Internet Protocol (IP) has become a global standard for networking, replacing non-IP networks, such as DECnet, IPX, AppleTalk and others. Following this evolution, support for AppleTalk on the CERN network infrastructure ceased on 3 October 2005.

The replacement solutions are:
● SMB/HTTP for file services; and
● LPR for print services.

We invite all users who have not done so already to switch to the replacement solutions. Instructions on how to do so can be found at http://cern.ch/it/gencomputing/mac-support/AppleTalk.htm.

An early announcement was made in the CERN Bulletin 34–35/2005 (“AppleTalk Routing: Phase-Out 30 September 2005”).
IT/Communications Systems Group


Department reorganization emphasizes physics support

As CERN moves into a second phase of both the LCG and EGEE projects, after due consultation I have decided to make the following organizational changes, effective from 1 November 2005.

The Grid Middleware (GM) group has been merged into the Grid Deployment (GD) group, which Ian Bird is leading. This reflects the need to consolidate the middleware integration, testing, distribution and support tasks as LCG approaches full production status, and is in line with the priorities set for the second phase of the EGEE project starting in April 2006.

The focus of the merged group will be on support rather than on development, and the activity will be labelled in EU terms SA3 (support activity) rather than JRA1 (joint research activity). A few people will continue to work on development at CERN in JRA1, while INFN will take the lead of this EGEE activity. Frédéric Hemmer, previously GM group leader, has been appointed deputy department head.

The Physics Services Support (PSS) group has been created to combine the Physics Database Services – which was in the Architecture and Data Challenges (ADC) group – and the Experiment Integration Support section of the GD group, as well as ARDA, the distributed analysis project for the LHC (see the ARDA article in the LCG news section). This new group is led by Jürgen Knobloch. In establishing this group, the IT Department aims to ensure the necessary focus on the mission-critical needs of the LHC experiments in the run-up to LHC operations in 2007.

The rest of the ADC group has been integrated into the activities of other existing groups. The Linux and AFS section of the ADC group has moved into the Fabric Infrastructure and Operations (FIO) group and the openlab activities previously in ADC have moved to the Departmental Infrastructure (DI) group. Bernd Panzer, previously ADC group leader, is appointed LCG fabric area manager on a full-time basis, thus strengthening the team and increasing the effort available for coordination.

Overall, these changes emphasize the department’s key role in providing the necessary computing and Grid services for the LHC experiments.
Wolfgang von Rüden, head, IT Department

The new structure of the IT Department groups, as of 1 November 2005:
● Departmental Head Office: Departmental Infrastructure (DI)
● Infrastructure and General Services: Administrative Information Services (AIS), Communication Systems (CS), Database and Engineering Services (DES), Internet Services (IS), User and Document Services (UDS)
● Physics Services: Controls (CO), Grid Deployment (GD), Fabric Infrastructure and Operations (FIO), Physics Services Support (PSS)
● Projects: LHC Computing Grid (LCG), Enabling Grids for E-science (EGEE), Joint Controls Project (JCOP), LHC Communications Infrastructure (Com In)

Registration gets an electronic fax

Users who cannot bring or send their computing-accounts registration form to the User Registration office at CERN can now use the new electronic fax service and send the form (completed, and signed by their computing-group administrator) to the dedicated fax number: 69600 (or +41 22 766 9600 from outside CERN).

Using the new Fax Services provided by IT/IS, your fax will be delivered by e-mail directly to [email protected]. To know more about these services see http://cern.ch/fax, and the FAQs at http://mmmservices.web.cern.ch/mmmservices/Help/?kbid=120020.

All the information about account registration at CERN is available at http://cern.ch/it-div/comp-usage.

Account closure tightens security

In today’s computing environment, where there is a permanent threat of computer-security incidents, it is vitally important to control who has access to CERN’s computing infrastructure. To obtain a computer account it is already necessary to be registered at CERN, but until now closing of accounts has relied on an annual review carried out by the group administrators. This process is now being automated and linked into the CERN registration database. The account review will run permanently and trigger action both when an account is unused and when a person’s end of contract is approaching. Advance warning of actions to be taken will be sent to both users and their supervisors:
● when an account has been unused for six months – the account will be blocked and a year later, after a second warning, the account will be deleted;
● when a person’s association with CERN comes to an end – based on the end date in the registration database, their computer accounts will be blocked. For most personnel there will be a grace period before the account is blocked to allow for the orderly transfer of data. This grace period will not apply to contractors. A year later, after a second warning, the account will be deleted.

A detailed table can be found at http://cern.ch/ComputingRules/procedures/account-closure-summary.pdf.
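
Purely as an illustration, the review rules above can be written as a small Python function. The grace-period length and the data source are assumptions made for the sketch; the real implementation is linked to the CERN registration database.

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=182)
ONE_YEAR = timedelta(days=365)
GRACE = timedelta(days=30)   # grace-period length assumed for the example

def review_account(last_used, contract_end, is_contractor, today=None):
    """Return the action the automated review would take for one account,
    following the rules summarised above (illustrative logic only)."""
    today = today or date.today()
    if contract_end is not None and contract_end <= today:
        # Association with CERN has ended: block (after a grace period,
        # except for contractors); delete a year after the second warning.
        grace = timedelta(0) if is_contractor else GRACE
        if today >= contract_end + grace + ONE_YEAR:
            return "delete"
        if today >= contract_end + grace:
            return "block"
        return "warn"
    if today - last_used >= SIX_MONTHS + ONE_YEAR:
        return "delete"      # blocked for a year after six months unused
    if today - last_used >= SIX_MONTHS:
        return "block"       # unused for six months
    return "ok"

# Example: an account last used eight months ago, owner still at CERN
print(review_account(date.today() - timedelta(days=240), None, False))  # block
```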

Before this procedure starts to run smoothly a major clean-up of accounts is needed. This will be done progressively over the coming months with warnings sent by e-mail to the people concerned. The most common action will be the blocking of unused accounts. However the main actions that we expect users will need to take are:
● re-registration with CERN for users and other collaborators who are not often physically at CERN and have let their registration lapse;
● transfer of ownership of service accounts that are still in the name of someone who has left.

Initially this procedure will apply only to computer accounts but we foresee that the principle will quickly extend to other computing-related areas such as websites, mailing lists and devices connected to the network that require a named responsible person.
Judy Richards, IT/DI


CHEP06 scheduled for Mumbai, India, on 13–17 February 2006

Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences for physicists and computing professionals working in high-energy and nuclear physics, computer science and information technology.

CHEP conferences are held every 18 months and provide an international forum to exchange information on computing experience and the needs of the community, and to review recent, ongoing and future activities.

CHEP06 will be organized by the Tata Institute of Fundamental Research (TIFR), Mumbai, India, and takes place on 13–17 February 2006.

Pre-registration has started and can be done either online or by downloading the PDF file from the registration page (www.tifr.res.in/~chep06/regdetails.php) and sending the completed form by fax or e-mail. The conference fee is reduced for early registration (before 15 December 2005).

There are plans to hold a three-day pre-conference workshop on service challenges for the LHC experiments just before the conference. This will be in the same venue on 10–12 February 2006 (Friday to Sunday). More information on this topic is available at https://uimon.cern.ch/twiki/bin/view/LCG/LCGServiceChallenges.

The conference will focus on the processing of HEP data at all stages, from the high-level triggers that run on farms of CPUs situated close to the experiment, through to the final analyses that use worldwide resources.

We expect to draw on the experience from running experiments and also to review the status of new studies of the distributed computing models being made in preparation for the LHC experimental programme.

For all details (conference background, organizers, topics, programme and preliminary registration information) go to the CHEP06 website at www.tifr.res.in/~chep06/.

The easiest way to get in touch with the conference organizers is by e-mailing [email protected].

Register online for CHEP06, to be held in Mumbai in February 2006.

CRA application represents the future of account handling

After more than 15 years of service, the Computer Centre Database (CCDB), which handles the computer account information for the IT-managed systems, will be replaced by the Computing Resources Administration (CRA) application. The benefits will be:
● new software technologies, like new relational database functionalities, which will enhance data quality;
● added functionality, like support for websites, e-mail lists and network devices;
● the removal of unused functionality, for instance budgeting and accounting for CPU usage, and supercomputer access rights; and
● improved computer security through a tighter integration with personnel data (the HR database).

Integration with the HR database allows us to block resources automatically once a person is no longer affiliated with CERN (for more information see the article on account closure in this issue). It will also enable us to notify the supervisor of a person no longer at CERN, and track other responsibilities, besides the ownership of accounts that need to be transferred.

Support will also be made simpler. The automatic resource expiration mentioned above will replace the yearly manual account-review process. Also, application maintenance is simplified by supporting only one user interface – the Web. A typical end-user will connect to the application with an AIS account, which is used for all CERN administrative Web applications, like EDH and the electronic salary slip. The end-user can then enter a nickname, change the preferred e-mail address, choose the visible e-mail address from the list of generic e-mail addresses, or set the default Unix shell and/or personal homepage URL. In addition, a number of self-service possibilities will be offered, like creating additional accounts, and AFS space quota requests.

The first release of CRA is foreseen at the beginning of 2006. It will mainly replace the current CCDB functionality and implement the automatic account expiration. However, its design is such that new features can be easily added in incremental releases over the next year. A few ideas we have in mind are automatic account creation once a person arrives at CERN, password management (synchronization and expiration), definition of dynamic groups of people for the creation of e-mail lists and access-control lists (e.g. everyone in the IT-AIS group), and support for other computing resources, like Oracle accounts and software licences.

For more information on the CRA project see the website at http://cern.ch/ais/projs/cra/welcome.html.
Wim van Leersum, IT/AIS


Technical briefs

Java Web hosting comes to CERN

Until September 2005, there was no easy way to have a Java Web application hosted at CERN without running a J2EE server yourself. And running a J2EE server yourself has a lot of implications: keeping up to date with security patches, upgrading to recent releases, scheduling back-ups, monitoring, etc. The situation changed with the recent introduction of a new IT service: J2EE Public Service. It provides central server infrastructure for the deployment of Java Web applications (servlets/jsp), and it’s meant for medium-size, non-critical applications.

Technically, each user of the service is assigned a separate Java application server (Apache Tomcat version 5.5), where they can deploy their Java Web application packaged as a war file. The user later manages the application through a Web interface (see screenshots).

The service is integrated with central CERN Web Services, allowing the same rules for naming websites and the same mechanism for creating or managing websites. To create a Java website, please go to http://cern.ch/WebServices/ and follow these steps:

Step 1
● Click “Create Web Site”.

Step 2
● Log on using NICE authentication.

Step 3
● Choose the contents category of your website and click “Continue Registration”.

Step 4
● Choose the name of your website and click “Continue Registration”.

Step 5
● Enter a description of your website and click “Continue Registration”.

Step 6
● Choose “Java web application (servlet/jsp)” as the type of your website (fourth bullet – if you can’t see it, please scroll down) and click “Continue Registration”.

Step 7
● Read the Service Level Agreement and CERN computing-security rules and click “I agree” at the bottom of the page.

Step 8
● Click “Create” on the last page. It takes up to a couple of minutes to create your website. Then, you can view the properties of your website and use the menu on the right to manage your application.

Step 9
● Click “Deploy” to submit your application packaged as a war file. After that, your application will be deployed on the server and accessible on the Web. By default, NICE authentication is configured, so you can use this mechanism to restrict access to resources in your application.

Please see http://cern.ch/j2ee-public-service/. Make sure the SLA fits your needs and give our service a try!
Michal Kwiatek, IT-DES

Choose “Java web application (servlet/jsp)” as the type of your website (step 6).

Use the menu on the right to manage your application. Click “Deploy” to upload your war file (step 9).

Projects


openlab II gets ready to build on the success of openlab I

Short review of “openlab I”

Principal mission
The CERN openlab for Datagrid applications was initiated during 2001–2002 and formally launched in January 2003. Leading IT companies that had a solid track record of delivering advanced technological solutions were invited to join in to evaluate and integrate cutting-edge technologies focused on the needs of the LCG. Since 2003, CERN openlab has involved CERN working in collaboration with Enterasys, HP, IBM, Intel and Oracle on a three-year project called the CERN opencluster.

Main project activity
The principal focus so far has been a Grid-enabled compute and storage farm – the CERN opencluster. It is based on 100 HP dual-processor Integrity servers, Intel’s Itanium Processor Family (IPF) processors, Enterasys’ 10 Gbit/s switches and a high-capacity storage system based on IBM’s SAN FS storage subsystem. The 10 Gigabit Ethernet technology is used for both the local-area and the wide-area connections. Oracle 10g is deployed as the relational database software.

Important results have been produced via participation in LCG data challenges and service challenges, as well as individual demonstrations, tests and benchmarks. Delivery of results has also taken other forms, as in the porting and integration of the entire EGEE/LCG-2 middleware stack on the 64-bit Itanium platform. Furthermore, many requirement documents and functional assessment reports have been delivered to the partners. More information can be found in the CERN openlab annual report.

In most cases the work is carried out as a direct collaboration with the research and development arm of the partner concerned.

Proposal for projects in “CERN openlab II”

Principal mission
The years 2006–2008 correspond to the period of the full deployment of the LHC Computing Grid. Real data will start flowing from the LHC detectors around the middle of 2007 but service challenges will simulate the real load on the Grid right from the start. The need to understand and master the continuous technological evolution will be just as important during this period as during the current three-year period. A natural option in this context is to build upon the expertise and reputation established by openlab I.

The mission of openlab II will continue to be a close collaboration with major trustworthy industrial partners able to demonstrate technological leadership and delivery of viable solutions. In addition, closer collaboration with the European Grid infrastructure project EGEE will be formalized, by taking advantage of the fact that EGEE enters a second phase in May 2006, called EGEE-II. EGEE is currently the largest multiscience Grid-deployment project in the world, with 70 institutional partners in Europe, the US and Russia.

Main project activities
Based on the openlab workshop Industrialising the Grid, held in June 2005, a number of topics were defined as of interest to CERN, the EGEE project and the openlab partners. Two specific themes were singled out as being of particular interest to some of the current openlab partners. These were:
● a Grid interoperability and integration centre (GIC) with close links to EGEE-II; and
● a platform competence centre (PCC) with the main focus on the PC-based computing hardware and the related software.

GIC
The GIC is proposed as a reinforcement of the activities within EGEE-II, and will allow the partners to take part in the integration and certification of Grid middleware. Partners will be expected to contribute hardware for a local test-bed, as well as suitable remote Grid resources.

The project will focus on the following activities:
● testing and certification – encompassing tests of the EGEE-II stacks on test-beds provided by the partners;
● support, analysis, debugging and problem solving – depending on the problems encountered on the contributed test-bed;
● interoperability – reviewing current levels of interoperability with industrial middleware stacks proposed by the partners, aiming to strengthen them and improve standardization of interoperable Grid middleware in general; and
● capture of requirements – based on all the other activities already mentioned.

PCC
The PCC will address a range of important fields, such as application optimization and platform virtualization.

Software and hardware optimization is seen as a vital part of the Grid deployment, since the demand for resources by the scientists is very likely to outstrip the available resources, even inside the Grid. Such optimization relies on deep knowledge of the architecture of the entire computing platform. On one hand this covers hardware items, such as processors, memory, buses and input/output channels. On the other hand it covers the ability to use advanced tools, such as profilers, compilers and linkers, specially optimized library functions, etc.

Platform virtualization can allow Grid applications to enjoy a highly secure and standardized environment presented by a “virtual machine hypervisor”, independent of all the hardware and software intricacies. The virtualization concept, initially offered as purely a software solution, will gradually move into hardware, allowing greatly improved performance of both the hypervisors and their “guests”.

These two themes will, of course, not exclude other activities being launched in the openlab framework.

Conclusion
During the first three years of existence, the CERN openlab has demonstrated the ability of CERN to work successfully with key industrial partners on the test and validation of new technologies. In the proposed second phase of CERN openlab, a formally defined relationship with EGEE-II will increase the impact of the openlab partnership beyond LHC computing to the wider domain of Grid-based scientific computing. Two projects for the second phase, the GIC and the PCC, are at an advanced stage of planning with several existing partners, and there is scope for further projects.
Sverre Jarp, IT/DI

Sverre Jarp, chief technology officer of the CERN openlab.

Conference report


CSC2005 is a record breaker

This year’s CERN School of Computing (CSC2005) has broken a record. The school took place in Saint Malo on 4–17 September, and attracted 79 participants of 27 different nationalities, overtaking the 2003 record of 25 nationalities.

Breaking nationality records is not a goal in itself, but it is a sign of the vitality and the increasing universality of CSCs. In fact, this year, in addition to most of the CERN member states, participants came from Armenia, Cameroon, China, Egypt, India, Iran, Korea, Mexico, Serbia and Montenegro, Russia, Turkey and Ukraine.

Such a rich and diverse attendance is made possible by the support that the school has received since 2004 (and will continue to receive for a period of four years) from the European Union, which provides grants to cover part or all of the participants’ costs. This year, the selection process was particularly difficult as the committee received almost 160 applications for only 80 places.

The school was jointly organized by CERN and the French CEA-Saclay Centre, which proposed the venue in Saint Malo and provided the computing environment. CERN’s director-general Robert Aymar visited the school and met students, had lunch with seven of them whose names were drawn from a hat, and attended a special session in which young participants presented topics of their choice after an on-site contest.

The tuition programme had three themes – Grid Technologies, Software Technologies and Physics Computing – and comprised 50 h of lectures and hands-on sessions. There were 10 lecturers from six different organizations.

One novelty of last year’s school was the launch of the idea of an inverted school (iCSC), “where students turn into teachers”. The first iCSC took place at CERN in February 2005. In addition to the 10 senior lecturers at CSC2005, two of the young lecturers from iCSC were invited to deliver special lectures in Saint Malo.

One of the 2005 novelties was the CSC booklet, a comprehensive handbook available at the beginning of the school, gathering organizational and programme material, as well as the lecture hand-outs.

The school concluded with the now-traditional optional examination – now in its fourth edition – to which 70 students registered. Sixty-one of them passed and received the formal CSC2005 certificate of credit. Congratulations to all, with a special distinction to Marek Biskup, Anselm Vossen and Nuno Santos, who obtained the best marks.

Let’s hope that this session’s students will respond as enthusiastically as their predecessors to this iCSC, so that some of them may also contribute to CSC2006, which is to be held in Helsinki.
● For more information on CSC see http://cern.ch/csc.
François Fluckiger, IT/DI

Participants at CSC2005. IT department leader Wolfgang von Rüden stands in the centre at the front, with CERN’s director-general on his right.

The record-breaking audience in the computing room at CSC2005.

LCG news


ARDA prototypes physics analysis

The ARDA project (A Realisation of Distributed Analysis for LHC) was created within the LCG project in 2004 to develop a prototype analysis system for the experiments at the Large Hadron Collider (LHC). The guiding idea behind ARDA is that exploring the opportunities and the problems encountered in using the Grid for LHC analysis would provide key inputs for the evolution of the EGEE gLite middleware.

Enabling a large, distributed community of individual users or small groups to use the Grid without central control stresses the infrastructure in a radically different way compared with large-scale, continuous production activities, such as generating simulated data or event reconstruction. Production can be regarded as a single-user activity and the emphasis is on maximizing the CPU utilization over long periods. All activity should be carefully logged. In the analysis environment, multiple users compete for resources and the latency between issuing a task and the availability of the first (partial) results becomes important. Often, the latency will be limited by I/O capabilities more than CPU power, like in scanning events or performing further event selection.

ARDA started in parallel with the EGEE project, which aims to provide a dependable Grid infrastructure for users from different scientific domains. In EGEE, the LHC experiments and a wide community of biomedicine applications have the key role of driving the evolution of the infrastructure. They are also major users of the computing infrastructure, for example with the LHC experiments’ data challenges.

The LHC experiments also contribute to the Grid evolution through initiatives like ARDA (jointly funded by LCG and EGEE). This provides the opportunity for exploring new ideas and prototyping advanced services not yet provided by the infrastructure, which might lead to the development of innovative applications.

Fostering innovation is a key element: the understanding of analysis activities in the LHC era is still evolving very quickly, so we need to retain flexibility. We remember, for example, how the analysis paradigm changed during the LEP years. While the hardware evolution is not under our control, the HEP community partially drives the way in which the computing and networking resources are organized (as do other cutting-edge developments, such as gLite). The patterns observed in the past suggest that the new Grid infrastructure will enable and stimulate new approaches to data handling and analysis, with the ultimate goal of enabling a large scientific community to maximize the scientific output of the LHC programme. However, a sound approach is needed for such an evolving infrastructure; therefore the future systems are prototyped together with the users, exposing them early to the Grid environment, and discussing the evolution on the basis of their experience.

Testing the Grid under real conditions gives effective feedback to the developers of Grid middleware and operators of the Grid infrastructure. During the first phase of EGEE, ARDA played a key role in testing the middleware, with its access to “previews” of gLite components. This activity progressively moved towards detailed studies of performance issues, but always using the analysis scenario as a guideline. As an example, the experience and requirements of the LHC experiments led ARDA to propose a general interface for metadata access services. An ARDA prototype called AMGA (ARDA Metadata Grid Application) has been integrated into the gLite middleware and is now also used by non-HEP applications.

The LHC experiments welcomed the idea of the prototype activity agreed with ARDA. First, this is compatible with the physicists’ approach of building a series of prototypes to control and guide the evolution of their instruments (from detectors to software). Second, ARDA helps to create the critical mass in the experiments’ distributed-analysis teams by adding a small but dedicated set of experts.

From the beginning, it was decided to propose an independent prototype activity to each LHC experiment. It was considered unrealistic to force commonality in the use of tools at an early stage, since each experiment has different physics goals and data-organization models. However, all the different activities hosted in the ARDA team benefit from common experience and cross-fertilization.

The ARDA-LHCb prototype activity is focusing on the GANGA system (a joint ATLAS-LHCb project). The main idea behind GANGA is that the physicists should have a simple interface to their analysis programs. GANGA allows preparing the application to organize the submission and gathering of results via a clean Python API. The details needed to submit a job on the Grid (such as special configuration files) are factorized out and applied transparently by the system. In other words, it is possible to set up an application on a portable PC, run some higher-statistics tests on a local facility (like LSF at CERN) and then analyse all the available statistics on the Grid by just changing the parameter that identifies the execution back-end.
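
The GANGA API itself is not reproduced in this article; the following hypothetical Python sketch only illustrates the idea of switching the execution back-end with a single parameter (the class and method names are invented).

```python
"""Hypothetical sketch of the "change one parameter to change back-end"
idea described above. This is not the real GANGA API; the names are
invented for illustration."""

class AnalysisJob:
    def __init__(self, executable, args, backend="local"):
        self.executable = executable   # the user's analysis program
        self.args = args
        self.backend = backend         # "local", "lsf" or "grid"

    def submit(self):
        # Back-end specific details (configuration files, batch options,
        # Grid job descriptions) would be generated here, hidden from the user.
        print(f"submitting {self.executable} {' '.join(self.args)} via {self.backend}")

# Develop and test on a portable PC ...
job = AnalysisJob("analyse.py", ["--events", "1000"])
job.submit()

# ... run higher statistics on a local facility such as LSF ...
job.backend = "lsf"
job.submit()

# ... and finally analyse the full statistics on the Grid.
job.backend = "grid"
job.submit()
```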

The ARDA-CMS activity started with the comprehensive evaluation of gLite and existing CMS software components. Eventually ARDA focused on providing a full end-to-end prototype called ASAP, prototyping some advanced services, such as the Task Monitor and the Task Control. The Task Monitor gathers information from different sources: MonaLisa (providing mainly run-time information via the CMS C++ framework COBRA); the CMS production system; and Grid-specific information (initially the gLite/LCG Logging and Bookkeeping and recently the R-GMA system).

The different components of the CMS analysis system being assembled within the LCG-CMS task force. Clockwise from top left: a screenshot from MonaLisa, collecting information from the running jobs; the report page of the ASAP prototype; a summary of the CMS-CRAB user activity; and three screenshots of the CMS dashboard showing user/job activity.

The Task Control implements CMS-specific strategies, making essential use of the Task Monitor information. Task Control understands the user tasks (normally a set of jobs) and organizes them so that the user can delegate many tasks, e.g. the actual submission (the user registers a set of tasks and then disconnects) and error recovery. Some key components of this very successful prototype, which incorporated a lot of feedback from real users, are now being migrated within the official CMS system CRAB in the framework of the CMS-LCG taskforce.
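
A hypothetical sketch of this delegation pattern – register a set of jobs, disconnect, and let a controller handle submission and error recovery – might look as follows; it is not the ASAP/Task Control code, and the monitor object it relies on is assumed.

```python
"""Hypothetical sketch of the task-delegation pattern described above.
The `monitor` object (answering submit/status calls) is assumed; this is
not the ASAP Task Control implementation."""

import time

class TaskControl:
    def __init__(self, monitor, max_retries=3):
        self.monitor = monitor            # wraps the actual Grid submission
        self.max_retries = max_retries
        self.tasks = []                   # [job, number of retries] pairs

    def register(self, jobs):
        """Called once by the user, who can then disconnect."""
        self.tasks = [[job, 0] for job in jobs]

    def run(self):
        """Submit every job, then resubmit failures up to max_retries."""
        for task in self.tasks:
            self.monitor.submit(task[0])
        while True:
            pending = False
            for task in self.tasks:
                state = self.monitor.status(task[0])
                if state == "failed" and task[1] < self.max_retries:
                    task[1] += 1
                    self.monitor.submit(task[0])   # error recovery
                    pending = True
                elif state not in ("done", "failed"):
                    pending = True                 # still queued or running
            if not pending:
                return
            time.sleep(60)                         # poll the Task Monitor
```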

The ATLAS work started with a series of exploratory activities, from contributions to specific components of the ATLAS production system, to the investigation of existing systems like DIANE (a tool for parallel analysis), as well as middleware evaluation. This activity continues in the framework of the ATLAS-LCG taskforce. Recently ATLAS has emphasized the usability of the distributed-analysis system and ARDA is directly involved in this activity. For example, DIANE was distributed to pilot users for evaluation, receiving good feedback. The usage of GANGA in ATLAS has been demonstrated and the first physicists are starting to use it.

The ALICE prototype is an evolution of the early distributed-analysis prototypes made by the experiment using their AliEn system, incorporating features from the Parallel Root Facility (PROOF) system and providing the user with access via the ROOT/ALIROOT prompt.

Close integration with the standard (local) analysis environment can be obtained only by carefully designing the gateway into the distributed system.

An important component, developed within ARDA, is the C Access Library. This optimizes the connection to the Grid infrastructure via an intermediate layer that caches the status of the clients (in particular their authorization) and therefore helps to optimize the performance as required for interactive use. Operations like browsing the file catalogue or inspecting a running job can be provided via mechanisms already known to ALICE users (shell commands and ROOT system).
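
The library’s actual interface is not shown here; as a rough illustration of the caching idea, an intermediate layer might keep recent answers to expensive calls (authorization checks, catalogue listings) for a short time so that interactive commands stay responsive. A hypothetical Python sketch:

```python
import time

class CachedGateway:
    """Illustrative caching layer between interactive clients and the Grid.
    Not the actual C Access Library; `backend` is an assumed object that
    performs the real (slow) remote calls."""

    def __init__(self, backend, ttl=300):
        self.backend = backend
        self.ttl = ttl            # seconds a cached answer stays valid
        self._cache = {}          # key -> (timestamp, value)

    def _cached(self, key, fetch):
        now = time.time()
        entry = self._cache.get(key)
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]       # cache hit: no round trip to the Grid
        value = fetch()
        self._cache[key] = (now, value)
        return value

    def authorization(self, user):
        return self._cached(("auth", user), lambda: self.backend.check_auth(user))

    def list_directory(self, path):
        return self._cached(("ls", path), lambda: self.backend.list_catalogue(path))
```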

ARDA is a direct and very active contributor to the EGEE project. EGEE benefits from exchanging ideas and experience with groups that actively support scientific communities, notably the LHC experiments and biomedicine. Being able to map concepts and strategies developed in one particular application domain to other sciences is a powerful indicator of an improved theoretical understanding of Grid techniques. Each application domain can contribute with its specific requirements and knowledge, and improve the system as a whole.

The different prototype activities in ARDA, together with other activities within the experiments, are converging on a first version of the distributed-analysis systems, which will be used in the first phase of LHC operation. In the second phase of EGEE we expect to streamline this activity to further support the experiments’ systems for both production and analysis. This also means continuing to influence the evolution of the middleware and the Grid infrastructure, using larger-scale experience and fostering the contacts with non-HEP scientists established during the first phase.
Massimo Lamanna, IT/PSS, CERN

Barcelona’s Port d’Informació Científica: a centre of excellence for scientific-data processing

PIC, the “Scientific Information Harbour”, in Barcelona gives support to scientific communities working in projects that require massive data processing and data storage. Since its creation in 2002, PIC has been collaborating closely with the LHC Computing Grid (LCG) as a Tier-1 centre and coordinates the participation of the seven research institutions (CIEMAT, IFAE, IFCA, IFIC, UAM, UB and USC) that constitute the Spanish contribution to the LCG. Manuel Delfino, PIC initiator and director, also coordinates the South-West Europe federation of the EU EGEE Grid Infrastructure project.

PIC is co-financed by national and regional governments through CIEMAT and IFAE and is equipped with solid data-processing and networking resources. PIC is hosted by the Universitat Autònoma de Barcelona, which maintains the physical infrastructure, including a 500 kVA motor generator. Within a very short time, PIC has become an important centre of reference for the implementation of the Grid and the development of related technologies in Spain. Its team of 18 engineers, technicians and researchers are working in manifold application collaborations to endow international research projects with an extremely powerful and cost-saving tool.

The automated mass-storage system at PIC is based on a StorageTek L5500 silo with a capacity for 6000 tapes, purchased in 2003. At present the slots are equipped with 200 GB cartridges. The five installed 9940B tape units allow a sustained transfer of 30 MB/s through a Fibre Channel link, with a top speed of 150 MB/s burst transfer or 1.2 Gbit/s. The current maximum storage capacity is 1.2 PB given the number of slots and cartridge capacity. Space for a second robot is reserved in the 150 m² machine room.

At present, 10 projects are storing data on the PIC mass-storage system, with a 40% share for LHC experiments involving Spanish groups (ATLAS, CMS and LHCb). Other physics-related collaborations storing data at PIC are the Alpha Magnetic Spectrometer (AMS – to be installed in Earth orbit), MAGIC (an astrophysics telescope located at La Palma, Canary Islands) and K2K (a neutrino experiment in Japan). Since 2004 digital medical images coming from the UDIAT centre at the Parc Taulí Hospital have been stored at PIC for medical research. PIC also stores data for molecular biology studies, medical drug research and turbulent fluid-dynamics simulations.
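
The quoted capacities are easy to cross-check (treating GB, TB and PB as powers of 1000):

```python
slots, cartridge_gb = 6000, 200
print(slots * cartridge_gb / 1e6, "PB")      # 1.2 PB maximum tape capacity

burst_mb_per_s = 150
print(burst_mb_per_s * 8 / 1000, "Gbit/s")   # 1.2 Gbit/s burst transfer
```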

During summer 2005 the traffic of digital data flowing to PIC from different research centres has risen considerably, due to the service challenges of the LCG project and the beginning of data transfer from a fluid-dynamics research project that actively uses the Barcelona Supercomputer (MARE NOSTRUM).

Sustained data transfers related to the LCG project have taken place from CERN to PIC, starting in July 2005. Up to 5 TB of data each day at an average speed of 60 MB/s were attained. The test, coordinated by Dr Gonzalo Merino, was designed to verify that the Grid middleware allows the automated transfer of large amounts of data.

By 2008, when the LHC experiments will be in full data production, they alone will store more than 1 PB of data at PIC each year. The L5500 silo is connected to a mass-storage system based on CASTOR, an open-software product developed at CERN with contributions from engineers at PIC and CNAF (Italy). CASTOR lets users write to tapes as if they were writing to their hard disk. Some 40 TB of storage disk space are currently used, around 10% of the total tape capacity. This storage system will have to be scaled up substantially in the future.
Stéphanie Buchholz, PIC


Physics computing

CERN explains its Linux strategy: SLC4 or SLC5?

PC hardware evolves. Applications evolve. Linux evolves. Linux distributions evolve. CERN Linux moves along with the rest of the crowd, trying to match the release schedules of the operating system (OS) and system software (AFS, KDE/GNOME, etc) to our experiments and user-community timelines, and so every 12–24 months a new release needs to be brought forward at CERN. This article looks at the planning for the Linux release that will be used at the LHC start-up.

The CERN Linux currently in distribution is Scientific Linux CERN 3 (SLC3). It is based on Scientific Linux 3 (SL3), an HEP-wide collaboration with FermiLab and CERN as major contributors. SL3 is itself based on the freely available Red Hat Enterprise 3 sources.

SLC3 was certified in November 2004, after six months of validation by the various CERN Linux user communities. SLC3 still supports most of the hardware arriving at CERN, and currently has very few functional deficiencies. But over time, newer hardware will require updates, and newer software versions will offer shiny new functionalities that CERN users may crave. So eventually, SLC3 will need replacing – but when and with what?

In 2007, LHC will start up. Given the complexity of all its computer-driven components, introducing a new Linux release in the middle of start-up is considered unwise. Working backwards from the various deadlines, the last opportunity for changing Linux versions will be around October 2006. Whatever Linux version is certified at that time will stay in place for several years.

Possible candidates are SLC4, derived from Red Hat Enterprise 4 (released in February 2005) via SL4 (released April 2005), and a future SL(C)5 based on the yet-to-come Red Hat Enterprise 5. In the third quarter of 2006, the SLC4 codebase will be about 24 months old (quite old by Linux standards). SLC5 will be fresher (e.g. it will support more recent hardware and have nicer/newer versions of application software), but no firm release date is known. So there is a risk that the formal certification of SLC5 could be delayed beyond the cut-off date in October 2006.

In order to keep SLC5 as an option, but still have a reliable fallback solution in case of delays with it, the LXCERT coordination group decided in May 2005 to fully certify SLC4 by the end of 2005, but not to deploy it widely on CERN machines except for hardware that was too new to be supported by SLC3 (e.g. certain desktop and notebook models).

Assuming its successful certification, SLC4 will be available and supported at CERN during 2006. It will support newer hardware, and may perform better than SLC3 for specific applications. But for the moment it does not have a longer support horizon than SLC3, so should only be deployed in areas where another re-installation a few months later is acceptable.

Instead, an attempt will be made to certify SLC5 in 2006. To speed up the certification process, it has been decided to split the basic OS validation (hardware and typical user applications) from the compiler certification (which concerns mostly the physics software), and merge the OS and compiler late in the process.

Only if SLC5 is delayed will SLC4 become the new production version, and SLC3 will be made obsolete shortly afterwards. If SLC5 certification happens on time, SLC4 will be made obsolete together with SLC3.
Jan Iven, IT/FIO

Fig. 1: the two OS candidates – Scientific Linux 4 (with gcc-3.4.3) and, via a FedoraCore 5 beta, Red Hat Enterprise 5/Scientific Linux 5 (with gcc-4.x) – and their respective system compilers, together with the planned SLC4 and SLC5 certification timelines over 2005–2007, leading to the LHC start-up release.


Cluster management helps ATLAS tests run smoothly

Regular users of the IT-FIO batch services (see http://cern.ch/it-dep-fio-fs/) will remember a major reduction in capacity in June and July as more than 700 dual CPU nodes were allocated for ATLAS tests. This article explains the background to these tests and how important our new cluster-management software was for their smooth running – and the return to normal service afterwards.

As part of their preparation for LHC data taking, the ATLAS online team needed a large number of nodes to perform functionality and verification tests of the planned DAQ farm and High Level Trigger system before moving ahead with purchase and installation for 2007. In particular, they needed individual subsystem tests for DAQ and Event Building Systems and then the Level 2 Trigger and the Event Filter system. At full scale, the ATLAS DAQ/HLT farm will comprise some 2000 nodes, so at least 600 nodes were needed for realistic tests. Where to find these today? The 1400 node LXBATCH cluster seemed the perfect solution.

However, the ATLAS online team had many special requirements for the machines ranging from fine tuning of the operating system and network configuration to the installation of specific ATLAS DAQ, HLT and monitoring software and the configuration of Apache Web servers. Ordinarily, making all the necessary changes across 700 machines would be a time-consuming process with a high risk of errors if some of the nodes were not configured correctly – and, moreover, more time would need to be spent reconfiguring the systems after the tests.

Fortunately, the Extremely Large Farm management system (ELFms, see http://cern.ch/elfms/) that has been introduced by IT to manage the future large-scale offline farms for LHC came to the rescue.

Two dedicated RPM packages containing the ATLAS software were installed into SWREP, the Quattor (see http://cern.ch/quattor/) software repository. The requirements of the ATLAS test cluster, for both installed software and operating system configuration, were then entered into CDB, the Quattor machine database that controls almost all systems in CERN’s Computer Centre.

The test schedule involved a steady ramp-up of the number of nodes allocated to ATLAS, allowing any final bugs or problems with the code to be ironed out with minimal disruption to normal services. As more nodes were needed, they were moved logically in CDB from the LXBATCH cluster to the ATLAS test cluster. The full power of the ELFms tools then took over with the LEAF suite (see http://cern.ch/leaf/) and LEMON monitoring information (see http://cern.ch/lemon/), ensuring the nodes were drained of running batch jobs and readied for reinstallation.

At this stage the Quattor installation and configuration tools took over, reinstalling the operating system according to the ATLAS requirements. When the tests were complete, the process was reversed – the nodes were moved logically back to the LXBATCH cluster in CDB and then the ELFms tools took care of reinstalling the 700 nodes with the LXBATCH configuration and returning them to service in just 36 h!
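
The real work is done by the Quattor/LEAF/LEMON tool suite through its own interfaces; purely as an illustration, the sequence described above could be orchestrated along these lines (all function names are invented):

```python
def reallocate(nodes, target_cluster, cdb, farm):
    """Hypothetical sketch of the reallocation sequence described above:
    move nodes to another cluster in the configuration database, drain
    them, reinstall them and return them to service."""
    for node in nodes:
        cdb.assign(node, target_cluster)   # logical move in the machine database
        farm.drain(node)                   # let running batch jobs finish
        farm.reinstall(node)               # OS and software per the new cluster profile
        farm.return_to_service(node)

# Moving back afterwards is the same call with the original cluster, e.g.
# reallocate(test_nodes, "lxbatch", cdb, farm)
```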

As well as being vital for the ATLAS online team, the tests were also a useful exercise for the FIO group as we look ahead to the challenge of managing the Tier-0 and Tier-1 clusters at CERN.
Thorsten Kleinwort and Véronique Lefébure, IT/FIO

The decrease of LXBATCH capacity during the ATLAS TDAQ challenge period, starting on 6 June 2005 and ending on 22 July 2005 when all the resources were moved back to LXBATCH.


The CNL editorial team wishes you a very merry Christmas and a happy New Year!


Information corner

Questions and answers from the Helpdesk

The User Assistance Team in IT/UDS maintains a database for Questions and Answers (Q&As) that have been dealt with by the Computing Helpdesk. This provides many tips on daily computing issues. You can search the database at http://consult.cern.ch/qa/.

Below are two questions that were raised recently at the Helpdesk (http://consult.cern.ch/qa/3937 and http://consult.cern.ch/qa/3938).

Procedure for broken PC under guarantee

Question
I have a broken PC that is still under guarantee. What is the procedure?

Answer
Contact the Helpdesk who will send a technician to carry out a proper diagnostic. If there is a hardware fault the PC must be returned to the manufacturer. The technician will help you to complete the necessary return form, which triggers a transport request to collect the machine.

When repaired, the machine will be delivered back to the user with a report on the work done. If the user needs more help to configure the machine then the Helpdesk will assist.

In case of an excessive delay the user should contact the CERN stores (Didier Colombarini or Isabelle Mardirossian).

Related links
● Stores fulfil computing orders (CNL September–October 2005).

Electronic Fax Service (cern.ch/fax)

Question
I read about the new Fax Service in CERN Bulletin 38/2005 (see http://bulletin.cern.ch/fre/egeneral.php?bullno=38/2005&base=gen#Article6). However, my registration failed because my “NICE account was ineligible for the service”.

Could you explain why, and how I can become eligible?

Answer: Only CERN staff members, project associates and fellows can register to use the service. A NICE account is required to log in to https://cern.ch/mmmservices/tools/fax, where registration is performed. If you get the error message “Your account is not eligible to Fax Services”, then your NICE account is not related to an official CERN staff member/associate/fellow.

You can still register if you have a professional need related to your CERN work. To do so you, or preferably the person at CERN responsible for you, should e-mail the Helpdesk.

“Service accounts” can also be registered (to allow secretaries to get an electronic fax number) by e-mailing [email protected].

Please note that IT/IS is currently reviewing the situation to understand the actual needs for this service and its user community.

Related links:
● Using Fax Services (http://mmmservices.web.cern.ch/mmmservices/Help/?kbid=120001)
● FAQ about Fax Services (http://mmmservices.web.cern.ch/mmmservices/Help/?kbid=120020).


Other general-interest Q&As and their corresponding websites

Windows (NICE – Office) related
● http://consult.cern.ch/qa/3921 sftpdfs – Name or service not known – Connection reset by peer
● http://consult.cern.ch/qa/3908 Weird folders in local drive after installing Windows updates
● http://consult.cern.ch/qa/3932 Take notes directly into a PowerPoint presentation
● http://consult.cern.ch/qa/3913 Windows taskbar icon disappears regularly
● http://consult.cern.ch/qa/3960 Visio 2003
● http://consult.cern.ch/qa/1596 Copy/paste from/to Excel

Unix (AFS – Lxplus/Lxbatch) related
● http://consult.cern.ch/qa/3934 ssh key and passwordless access to Lxplus
● http://consult.cern.ch/qa/0011 Using ssh/scp from batch/cron jobs
● http://consult.cern.ch/qa/3728 Problem with command kpasswd
● http://consult.cern.ch/qa/3957 Usage of Lxbuild

Miscellaneous
● http://consult.cern.ch/qa/1076 Wrong data and information in CERN phone book
● http://consult.cern.ch/qa/3919 Web – Java application JSP/Servlet

Keep your software updated and stay ahead of viruses

Up-to-date and patched software helps minimize the risk of your computer being infected by a virus or attacked by a hacker. In most cases, making the update process automatic can only help.

At CERN you have an easy option: let your Windows PC be managed by the NICE service, or profit from the automatic update option that is set to “on” by default in the CERN standard Linux desktop installation. This will ensure that the operating system and CERN-supported applications are kept up to date.

At home it requires a bit of initiative on your part, but the amount of software offering automatic updates is steadily increasing. The CERN licence for anti-virus software from Symantec allows you to install and update the software on your home computer, as well as on unmanaged computers at CERN (see http://cern.ch/antivirus).

Did you know about Microsoft Update, which provides patches not only for Windows, but also for other Microsoft products, such as Microsoft Office, Visio, Project and Publisher? If you are already using Windows Update, open it, click on the “News” item “Upgrade to Microsoft Update” in the lower right-hand corner and follow the instructions. If you are not already using Windows Update, go to http://update.microsoft.com/microsoftupdate. Finally, check that you have a firewall active: from the Control Panel go to the Security Centre.

If you are concerned that automatic updates may go wrong and damage your system, remember that you are one of many millions who will use these update systems, and the more standard and up to date your system is, the smaller the risk of something going wrong.
Judy Richards, IT/DI




Calendar

November
30 – 2 Dec IFIP International Conference on Network and Parallel Computing (NPC 2005), Beijing, China, http://grid.hust.edu.cn/npc05

December
5–8 IEEE International Conference on e-Science and Grid Technologies, Melbourne, Australia, www.gridbus.org/escience

7–9 International Symposium on Parallel Architectures, Algorithms, and Networks (I-SPAN), Las Vegas, NV, US, http://sigact.acm.org/ispan05/

2006

February
13–17 15th International Conference on Computing in High Energy and Nuclear Physics (CHEP06), Mumbai, India, www.tifr.res.in/chep06/

March
1–3 TridentCom 2006, Barcelona, Spain, www.tridentcom.org

April
3–7 HEPiX Spring 2006, CASPUR, Rome, Italy, www.caspur.it/en/


IT services e-mails do not contain attachments

Spammers commonly spread viruses by sending e-mails that seem to contain important information and that appear to be sent from a service manager’s account. The screenshot above shows such a message that CERN users have received, i.e. a mail from the registration service, supposedly providing important information about your computer account.

You must immediately delete these e-mails from your mailbox, and never open the attachment. None of the CERN service managers will ever send an attachment. If information has to be separated from the message (for instance because it is too long) then, in most cases, you will be referred to a CERN-registered website (containing cern.ch in the URL). If in any doubt, ask the Helpdesk before taking any action (call 78888 or e-mail [email protected]).
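The advice about “containing cern.ch in the URL” is best read as referring to the host part of the link: the string cern.ch appearing somewhere else in an address proves nothing, a trick phishing links often rely on. The short Python snippet below is only an illustration of such a check, not an official CERN tool:

from urllib.parse import urlparse

def looks_like_cern_url(url):
    # True only if the link's host is cern.ch or one of its subdomains.
    host = urlparse(url).hostname or ""
    return host == "cern.ch" or host.endswith(".cern.ch")

print(looks_like_cern_url("https://mmmservices.web.cern.ch/mmmservices/"))  # True
print(looks_like_cern_url("http://evil.example.com/cern.ch/login"))         # False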


The deadline for submissions to the next issue of CNL is 3 January 2006. E-mail contributions to [email protected].

To be e-mailed when a new issue of CNL is available, subscribe to the mailing list cern-cnl-info. You can do this from the CERN CNL website at http://cern.ch/cnl.

Service Status Board gets dynamic

You may have noticed that the IT Service Status Board (SSB) has been split into two pages: the usual page at http://cern.ch/it-servicestatus contains information written by the IT manager on duty at the request of all IT service providers, to inform users about a service incident, a scheduled intervention or a service change that was agreed and planned in advance. There is now a second page collecting all the “dynamic” information, including the current status of most services, as provided by the services responsible (e.g. the TVScreen provided by IT/FIO, or the AIS news from the Administrative Information Services). This dynamic page is easily accessible from the SSB via a link, or can be accessed directly at http://cern.ch/it-support-servicestatus/default-dynamic.asp.

If you have information for these pages please contact the IT manager on duty at [email protected].

As usual, changes to services in the IT department are published on the SSB. The most recent changes and their dates of posting are shown below.


15 September: Restrictions on the number of database sessions for DEVDB service (19 September)
22 August: HR11i upgrade – Jinitiator to be reinstalled (22 August)
19 August: AppleTalk Routing: phase-out 30 September 2005 (30 September)
9 August: CASTOR client upgrade on LXPLUS/LXBATCH/LXGATE/LXBUILD – libshift.a (9 August)
28 June: Wide deployment of the CASTOR client software (4–5 July)
25 April: Rundown of SUNDEV facility (1 July)
27 June: Secure external access to CERN’s services to replace VPN (June)
20 June: End of Red Hat 7 batch and interactive services (end of June)

News from the IT Bookshop

CERN users can find computer and physics books and CDs at discount prices at the User Support Bookshop in Building 52 1-05 in the Central Library (since July 2005). The shop is open weekdays 8.30 a.m.–12.30 p.m. and can also be contacted by e-mail ([email protected]) or phone (74050).

Books are bought from some 15 publishing houses, and users are very welcome to suggest acquisitions. The catalogue can be found at http://cdsweb.cern.ch/tools/itbook.py and now includes physics books. The most recent computing acquisitions include:
● Simply C++ by Deitel, Choffnes and Kelsey
● C++ Primer (fourth edition) by Lippman, Lajoie and Moo
● Learning Java (third edition) by Niemeyer and Knudsen
● Home Networking: The Missing Manual by Lowe
● Oracle PL/SQL Programming (fourth edition) by Feuerstein and Pribyl
● Learning Perl (fourth edition) by Schwartz, Phoenix and Foy
● Coding Theory – A First Course by Ling and Xing.
Roger Woolnough and Jutta Megies, IT/UDS
