
Pre-publication version. Presented at MKWI 2008 in Munich and eventually published as: Pawlowski, J. M., & Richter, T. (2010). A Methodology to Compare and Adapt E-Learning in the Global Context. In: Breitner, M. H. (Ed.), E-Learning 2010 – Aspekte der Betriebswirtschaftslehre und Informatik. Physica-Verlag HD, Berlin, pp. 3-14.


A Methodology to Compare and Adapt E-Learning in the Global Context

Pawlowski, J. M.1, Richter, T.2

1 Information Technology Research Institute, University of Jyväskylä, Finland, E-Mail: [email protected]

2 Korean German Institute of Technology, Seoul, Korea, E-Mail: [email protected]

Abstract: In this paper, we present a solution for testing cultural influences on E-Learning in a global context. Based on a metadata approach, we show how specifically cultural influence factors can be determined in order to transfer and adapt learning environments. We present a method by which those influence factors can be validated, both to improve the dynamic metadata specification and to be used in the development of (international) E-Learning scenarios.

Introduction

In this paper, we focus on a methodology to test influence factors on E-Learning within a globally distributed setting. Based on a metadata approach, we show how specifically cultural influence factors can be determined. Those factors have to be considered during the adaptation process when learning environments are transferred between different contexts.


E-Learning has become an issue of global importance. Higher education institutions and educational organizations compete on a global educational market. On the one hand, learning environments (such as E-Learning products and services) might be developed in a distributed setting (e.g., concepts from Germany, software development in India). On the other hand, learners are often distributed around the globe in international training programs. Another case is the export of existing materials. In this case, learning environments have to be transferred and adapted to a new context, e.g., a different culture.

Those settings require a careful analysis of the context in which a learning environment shall be used. In this paper, we present an approach for representing context and culture as metadata. Since, for most countries, learner-related data in particular are not available, we focus on a test method to determine learner-related cultural influence factors and their impact within learning processes.

Global E-Learning

E-Learning in the global context depends on a variety of influence factors. First of all, global E-Learning covers different settings, depending on the location of providers and learners. Typical cases are:

• E-Learning export: learning scenarios are developed in one country and exported to another.
• Distributed E-Learning: both developers and learners are distributed around the globe.
• Distributed E-Learning production: learning scenarios are developed around the globe (as an equivalent to global software development [Karo1998]) and the outcome is used in a single country.

Generally, several influence factors are similar to those of software development processes in a global setting. A variety of those have been discussed: cultural influence factors [Kruc2004, KeKR2002], team management / knowledge exchange [DaMa2001, ChRo2004, Karo1998], or communication [DaMa2001, Denm2003].

In the field of E-Learning, those influence factors are refined. SEUFERT describes three dimensions to distinguish the settings [Seuf2001]:


• Interaction mode: from face-to-face to computer mediation
• Communication form: temporary groups vs. permanent communities
• Cultural context: similar vs. diverse cultural geography

Within these dimensions, cultural influence factors determine a variety of design decisions, e.g., regarding didactical or communication design. COLLIS distinguishes 19 design dimensions which are influenced by culture [Coll1999]:

• Group aspects: Group size, member proximity, task type - in relation to software systems supporting group collaboration
• Pedagogical aspects: Pedagogic philosophy, subject area disciplines, deep and surface learning, horizontal and vertical communication, responsibilities of learners and instructors, teaching styles, student behaviors
• User interface: Language, visual aspects of the user interface (colors, icons, symbols), human-computer interaction
• Technological aspects: Infrastructure differences, access differences, technology-skill differences
• Institutional aspects: Requirements for examinations, timetables for course participation, prerequisites for courses, accreditation requirements, locations for course participation

It can be observed that most research in this field is based on generic cultural models, such as HOFSTEDE [HoHo2005], HALL [HaHa1990], HENDERSON [Hend1996] and finally also TROMPENAARS & HAMPDEN-TURNER [TrHa2006]. All of those models define abstract dimensions showing cultural stereotypes and classifications. Their purpose is to make cultures classifiable and comparable in order to determine differences or distinguishing attributes of individuals. However, they are too generic to be useful for concrete design decisions in the E-Learning development process.

Therefore, specific models have been developed, refining those factors for the field of teaching and learning. As an example, HENDERSON defined a generic model for the field of multimedia teaching by using 14 dimensions [Hend1996]. Finally, although presenting different views, most models contain dimensions which directly correlate with the five dimensions of the Hofstede model [HoHo2005].

Although the model of Henderson defines clear influence factors for the field of multimedia teaching, it does not help in finding concrete design decisions to adapt E-Learning for a specific context. This problem is addressed generally for E-Learning by EDMUNDSON [Edmu2007] and specifically for hypermedia learning systems by KAMENTZ [Kame2006]. Even though those models seem to guide through the design and development process, they neither rely on empirical evidence, nor do they provide comparable, validated, operationalized factors which could be used for automatic adaptation to a given context.

Summarizing existing models, it can be stated that the influence factors on E-Learning are not yet fully understood. Even though there are several models to explain cultural influence factors, there is no methodology to compare cultures based on operationalized factors and to validate those for E-Learning.

Adaptation to Context and Culture

To overcome the identified problems, we have developed a dynamic process to identify, validate, and use context influence factors for E-Learning within the process of adapting E-Learning to a new context. The context of E-Learning, in our meaning, contains every influence factor on learning scenarios which cannot be influenced in the design process (such as cultural aspects). Therefore, we identified the main adaptation steps to fit learning environments to new contexts and identified factors influencing this process (for a full description of the model, see [RiPa2007]).

Adaptation of E-Learning environments / scenarios means that (existing) learning objects or scenarios are modified for usage in a new context. This adaptation process can differ in the degree of adaptation needed: from minor adaptation (e.g., changing media formats) to a full re-authoring (e.g., translation, adaptation to a different culture) [GüGM2004]. The adaptation process consists of five phases (Fig. 1):

• Search: In this phase, actors search for useful learning objects, e.g., in a learning object repository or a knowledge base.
• Validate Re-Usability: As a first step, the (originally intended) context and the new context are compared, e.g., using similarity comparisons and recommender systems (a minimal sketch of such a comparison is given after Fig. 1).
• Re-Use / Adapt: In this phase, the learning scenario is retrieved and changed. Typical scenarios include re-using scenarios for a new purpose or context (e.g., from Higher Education in the US to corporate training in Germany).
• Validate solution: In this phase, it is tested how the changed learning scenario fits the needs of the new context.
• Re-Publish: Finally, the new learning scenarios are re-published and shared with the new user community.

Fig. 1 The Adaptation Process
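The context comparison in the Validate Re-Usability phase can be illustrated with a minimal Python sketch. The flat key/value representation of a context, the attribute names, and the decision threshold are illustrative assumptions for this paper's idea, not part of the specification in [RiPa2007].

```python
# Minimal sketch (illustrative only): comparing an intended context with a
# target context by counting matching attribute values. The attribute names,
# flat key/value layout, and threshold are assumptions, not from [RiPa2007].

def context_similarity(intended: dict, target: dict) -> float:
    """Fraction of shared attributes whose values match in both contexts."""
    shared = set(intended) & set(target)
    if not shared:
        return 0.0
    matches = sum(1 for key in shared if intended[key] == target[key])
    return matches / len(shared)

# Hypothetical contexts of the original and the new learning scenario.
intended_context = {"language": "de", "sector": "higher_education", "group_work": "expected"}
new_context = {"language": "ko", "sector": "higher_education", "group_work": "expected"}

if context_similarity(intended_context, new_context) < 0.8:  # threshold is an assumption
    print("Substantial adaptation likely needed before re-use.")
```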

Generally, it is necessary to identify context influence factors which 1) have a granularity that can support design decisions when adapting learning scenarios to a new context and 2) are machine-readable for (semi-)automatic adaptation. To compare and analyze the context of learning scenarios, we defined a common language, i.e., a specification to represent the context and its influence factors.

Our metadata specification for this purpose contains the following classes and aspects [RiPa2007]:

Fig. 2 The context of learning scenarios [RiPa2007]


In the center, the learning scenario is illustrated with its contextual elements (CE). Those elements directly or indirectly influence a learning scenario. The context blocks (outside the circle) represent typical influence types (impacting the learning scenario). Each context block consists of related sets of various context metadata and related attributes. The data structure description is defined to be used within a context metadata database, which is needed for our adaptation process. The following table shows sample elements and formats of the metadata, focusing on culture-related influence factors of the learner, with sample values from our application scenario (German and Korean Higher Education). The chosen elements represent significant differences between the two countries in focus.

Table 1: Sample Context Metadata for Learners

ID | Attribute | Germany | Korea
CM10030 | Meditation Model | Understanding, reflecting | Memorizing, reproducing
HAM10005 | Expectable Group Behavior | Group members are emancipated and expect cooperation | Group members search for a group leader, who defines the group's opinion
HAM10001 | Ability to Stand Criticism | Open criticism is possible | Direct criticism is often seen as offending

With the presented approach it is possible to identify and represent influence factors for the adaptation process and to improve design decisions in this process. Even though the model is based on previous research, some aspects are not yet fully understood. Therefore, we consider this model a basic specification which is dynamic and which evolves through validations and experiences.
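To make the notion of machine-readable context metadata more concrete, the following sketch shows one possible in-memory representation of the records from Table 1. The class layout and field names are assumptions made for illustration; only the IDs, attribute names, and country values are taken from the table.

```python
# Illustrative sketch of machine-readable context metadata records.
# The dataclass layout is an assumption; IDs and values are taken from Table 1.
from dataclasses import dataclass

@dataclass
class ContextMetadata:
    element_id: str   # identifier within the context metadata database
    attribute: str    # learner-related influence factor
    values: dict      # country code -> expected value in that context

records = [
    ContextMetadata("CM10030", "Meditation Model",
                    {"DE": "understanding, reflecting", "KR": "memorizing, reproducing"}),
    ContextMetadata("HAM10005", "Expectable Group Behavior",
                    {"DE": "group members are emancipated and expect cooperation",
                     "KR": "group members search for a group leader who defines the group's opinion"}),
    ContextMetadata("HAM10001", "Ability to Stand Criticism",
                    {"DE": "open criticism is possible",
                     "KR": "direct criticism is often seen as offending"}),
]

# A developer adapting a German course for Korea could list all attributes
# whose values differ between the two contexts:
differing = [r.attribute for r in records if r.values["DE"] != r.values["KR"]]
print(differing)
```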

Test Methodology

Most adaptation models in the field of E-Learning, such as [Edmu2007, Kame2006], are based on models from other domains (see chapter 2) or on pure assumptions. Therefore, we have developed a test method for the validation of our influence factors, specifically of those related to the learners. It serves two purposes: On the one hand, it shall be used to validate and improve the metadata specification. On the other hand, it can be used by E-Learning developers: before the adaptation as a user and context analysis, and after the adaptation to determine the success of their solution. As an example, the test method could be used in a prototyping approach to test specific groups in a given setting. Whereas the full test methodology contains far more influence factors, this chapter focuses on specific cultural, learner-related aspects.

In previous research [RiPa2007] we identified a set of around 170 parameters which influence learning scenarios and can cause changing needs when a course shall be adapted to a new context. Many of those influence factors, e.g., those related to the technological infrastructure or the legal system, can be collected rather easily (because the data are obvious or publicly available). Some influence factors, in particular those related to the learners, are neither easily available nor are their consequences clearly understood. Since especially those factors cause costly change processes, understanding their impact is necessary to justify the related change tasks in the adaptation process.

Test Design

Our test method confronts students with different learning situations. Data concerning their reactions, learning success, and personal views are collected. The learning situations focus on cultural aspects and differences [RiPa2007]. Depending on the culture, some differences may be accepted by the students and have no influence on their learning or understanding process. Others, with significant influence, may disturb their success.

In our method, we distinguish two phases:

1. Exploration: In the first phase, we run initial tests to confirm potential influence factors and to discover new aspects.
2. Validation: In the second phase, we focus on an in-depth qualitative and quantitative analysis to validate the harmonized influence factors.

This method mix will lead to results that improve our metadata specification and to a clear analysis tool for developers of international E-Learning. In the long term, we expect our experiments to show the impact of certain learner-related, culture-dependent influence factors. This shall help developers within the evaluation phase to determine whether their adaptation efforts have been successful or whether further steps have to be taken. Additionally, new culture-specific data for the context metadata, which may later be reused by developers, can be collected.

Case Study: Applying the Test

In the following, we describe our test in detail. The experiment is embedded into a regular course "Quality Management and Evaluation in Higher Education", part of an E-Learning Master program in Business Information Systems at the University of Duisburg-Essen. For the German students, this course is part of their degree program; for the Korean students, a certificate is issued. This case was chosen because of the topic of quality management: on the one hand, it is directly related to E-Learning; on the other hand, it addresses more domains than just the field of information technology, so that students from other fields of study can also participate in the experiment. Additionally, the course is designed in a way that no subject-specific preconditions are required.

For the experiment, we chose 15 students each from South Korea and Germany. The number of tested students must be manageable and monitorable for a single researching tutor. As a prerequisite for the experiment, the participants must take part in a master program and have a minimum level of English language skills, so that the initial knowledge of all students is comparable.

The first test run (planned for March until October 2008) will be the exploration phase in a prototype setting. After this, the validation phase starts, in which the results are analyzed and test refinements undertaken.

First of all, we defined several test cases about influence factors to be integrated into the course. The course was translated into English as a common language and restructured, so that different test cases could be applied and monitored separately in different course sections. This was necessary to achieve results which can be evaluated as clearly as possible.

The course is presented in English. The main idea behind the experiment is to confront students with learning situations which differ from those they are used to and which may cause conflicts influencing their learning success. The typical German teaching style is preserved in most parts, as this defines the differences (possible conflicts) which the Korean students are confronted with. Single parts of the course are designed differently, so that reactions of the German students can also be tested in some cases. A systematic confrontation of the German students with Korean educational styles will be implemented in a later test.

In our first experiment, the test scenario is primarily designed to test the reactions of Korean students to a German learning environment. With the current experiment design, in most cases no conflicts are expected for the majority of German students (besides explicit tests causing conflict situations). Primarily, the Korean students are monitored. Nevertheless, the German students are also monitored in each test case to verify that the teaching style used can truly be considered "German" and to confirm assumptions regarding German behavior, learning strategies, and ways of understanding.

As methods to gather data, communication protocols (e-mail, forum, chat), interviews, questionnaires, examination results, the results of two practical tasks (both group and individual work), and the results of the final examination will be taken into consideration. Each step is documented to be included in the second phase of this experiment, the validation phase (see below). We expect to improve the test scenarios and to formulate hypotheses for quantitative analyses in future experiments. The test shall finally be reproducible and repeatable in any environment by any person, in order to achieve comparable results for a variety of settings, in particular for different cultures.

In the following, a list of planned test classes is shown as an overview.

1. Test class 1: How far are students able to apply learned methods to problems? (4 test cases)
2. Test class 2: Do language styles influence the depth of understanding? (2 test cases)
3. Test class 3: Do the students in each group simply memorize information or do they try to reflect on it? How do they deal with unfamiliar presentations of contents? (1 test case)
4. Test class 4: How do the students build groups and which structure can be observed within the groups? (1 test case)
5. Test class 5: How is collaborative group work practiced? What is the output? (1 test case)
6. Test class 6: What kind of feedback is preferred and how do the students react to different kinds of feedback? (1 test case)
7. Test class 7: Do the concepts of guilt and shame (west / east) influence the willingness to accept plagiarism as a task solution? (3 test cases)
8. Test class 8: What is the relationship to authorities and do students put the learned contents into question? (2 test cases)
9. Test class 9: What kind of working style do the students have and how do they react to uncertainties? (5 test cases)
10. Test class 10: Which teaching style is preferred by the students and how do they react when it is not met? (1 test case)
11. Test class 11: How do Korean students react when they are confronted with a German examination situation, which includes a lot of conflicting parameters (stress test)? (1 test case)

Those test classes can be used to structure our test method, as well as to structure the course, incorporating them into different modules. Using this structure, tests can also be used in development processes within a prototyping phase.
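As a purely illustrative sketch, the test classes could be represented as structured data and mapped onto course modules. The identifiers, wording, and module assignment below are assumptions for illustration, not the actual course structure.

```python
# Illustrative sketch: organizing test classes and their test cases so they can
# be mapped onto course modules. Identifiers and the mapping are assumptions.
test_classes = {
    1: {"question": "Ability to apply learned methods to problems", "test_cases": 4},
    2: {"question": "Influence of language style on depth of understanding", "test_cases": 2},
    8: {"question": "Relationship to authorities / questioning contents", "test_cases": 2},
    11: {"question": "Reaction to a German examination situation (stress test)", "test_cases": 1},
}

# Hypothetical assignment of test classes to course sections, so that each
# case can be applied and monitored separately.
course_modules = {
    "Module 1: Quality concepts": [1, 2],
    "Module 2: Evaluation methods": [8],
    "Final examination": [11],
}

for module, classes in course_modules.items():
    cases = sum(test_classes[c]["test_cases"] for c in classes)
    print(f"{module}: {cases} test case(s)")
```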

In the following, two detailed test cases are described as an example of the above-mentioned test classes. We focus on the test cases in the exploration phase.

Test class: Relationship Method / Problem (Test class 1)

Test 1: Strictly use the German way of transmitting method knowledge: do not relate concrete methods to concrete problems.
Task: Students have to decide themselves which solution they choose and discuss their decision by evaluating the other methods attached to this situation. A single correct decision will not be possible.
Expected result: The German students will solve the task, but may argue incorrectly (no conflict). A conflict is expected for the Korean students, because they have to decide which solution is better, which may imply that the teacher taught ineffective methods.
Evaluation method: Test, by analyzing the results (do they fit the task; counting the numbers of result classes: correct, semi-optimal, wrong, other); post-test questionnaire (experiences of the students: do the Korean students report more difficulties concerning the choice? The result of the related questionnaire aspect is Boolean: if more difficulties, yes; else, no).

Test 2: Provide a semi-optimal solution for a concrete problem.
Task: Students are asked to reproduce a method by solving a similar but not equal example (semi-optimal solution) and to discuss the solution. The model for the solution will fit the task nearly completely, but one aspect shall make the decision questionable.
Expected result: The German students will point out that the solution is semi-optimal and consider the task as not satisfying because they do not have the chance to improve it. The Korean students will do the task exactly as demanded and accept the solution without putting it into question.
Evaluation methods: Test, by analyzing the results (is the method applied 1:1 to the problem? The result of the related analysis is Boolean: if more methods are applied 1:1, yes; else, no); post-test questionnaire (students' experience: did Korean students experience more problems than German students? The result is Boolean: if more problems were experienced, yes; else, no).

Test class: Authorities - respect for teachers, authors and tutors (Test class 8)

Test 15: In particular within the practical work, test, and examination phases, the students will be required to put the learned contents into question and to discuss their usefulness.
Expected results: The Korean students may have problems solving the task in an adequate way, because criticizing a person of authority is seen as a sign of disrespect. Yet the pressure of the examination may require them to do so even when their culture demands that they do not.
Evaluation methods: Comparing results within and between the cultural groups and evaluation through a questionnaire. Related questions are whether the Korean students challenge the authority of the author and in which way. The first part has a Boolean output (if conflict, yes; else, no); the second part cannot yet be categorized because the output form is unknown. Ideally, a scale can be defined which has the same structure for both countries; then the values within the groups are aggregated and compared.
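The Boolean evaluation rules described in the test cases above can be stated compactly in code. The sketch below only restates the counting logic; all answers and result classifications are invented placeholders, not data from the experiment.

```python
# Minimal sketch of the Boolean evaluation rules described above. All answer
# and result data are invented placeholders, not observations from the study.
from collections import Counter

# Test: classify each submitted solution (correct / semi-optimal / wrong / other)
# and count the result classes per group.
german_results = ["correct", "semi-optimal", "correct", "wrong"]       # placeholder
korean_results = ["semi-optimal", "semi-optimal", "correct", "other"]  # placeholder
print(Counter(german_results), Counter(korean_results))

# Questionnaire: Boolean outcome - did the Korean group report difficulties
# concerning the choice more often than the German group?
german_difficulties = [False, False, True, False]  # placeholder answers
korean_difficulties = [True, True, False, True]    # placeholder answers
more_difficulties = sum(korean_difficulties) > sum(german_difficulties)
print("More difficulties reported by Korean group:", more_difficulties)
```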

For this initial test run, we plan to find out whether our test cases are sufficiently and clearly defined and whether the results can be considered meaningful for a later statistical analysis and for deductions that generally describe national differences.

Validation

Within the exploration phase, we mainly use qualitative methods, due to the lack of a deeper understanding of the possible reactions and impact types. In the validation phase of the method, quantitative methods are used to test hypotheses derived from the first phase.

In the validation phase, we will run the same test cases with a more quantitatively oriented design. Based on the results of the exploration phase, the same aspects are addressed in the validation to test similar objectives depending on situations and groups. Hypotheses and conditions for their significance are defined based on the results of the exploration phase. This will be done in the first repetition of the experiment after the refinement. However, we still use a method mix: qualitative methods are included to monitor potential changes regarding the definition of the related context metadata.


Where quantitative analyses are possible and useful, in particular the differences between the groups are analyzed, but also the commonalities within the groups. Our test methods focus on small groups with mostly unknown distributions. Commonalities within the groups may allow careful conclusions on culture-related behavior. A complication in the use of statistical methods here is that the number of samples is very small and that the samples within one group cannot necessarily be considered (statistically) independent from each other. The latter problem results from the fact that the students shall communicate with each other: when a student, for example, mentions a problem with the course in public, it can be expected that others may (partly) adopt this view and afterwards will no longer state their personal view but a combination of both. Nevertheless, the data gathered from the two groups as a whole (two sets of data) can indeed be considered independent, because the Korean and the German students will not have contact with each other.
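One possible way to operationalize the quantitative comparison between the two small groups is a non-parametric test. The paper does not prescribe a particular procedure, so the choice of the Mann-Whitney U test and the scores below are assumptions for illustration only.

```python
# Hedged sketch: comparing two small groups with a non-parametric test.
# The Mann-Whitney U test is our illustrative choice, not prescribed by the
# paper; the scores are invented placeholder data, not study results.
from scipy.stats import mannwhitneyu

german_scores = [14, 17, 12, 15, 16]  # placeholder task scores per student
korean_scores = [11, 13, 10, 14, 12]  # placeholder task scores per student

stat, p_value = mannwhitneyu(german_scores, korean_scores, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")

# With only 15 students per group and possible dependence within groups, any
# such p-value would support only careful, tentative conclusions.
```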

As a result of the second phase, we expect results regarding 1) the validity of the metadata specification, 2) insights into culture-related influence factors, and 3) practical recommendations for developers in the international context.

Conclusion

In this paper, we have shown an approach to adapt E-Learning to a global context. Based on a metadata approach, the adaptation process will take a variety of cultural / contextual factors into account. To determine the impact and validity of those factors, we have developed a test method. This method can be used for different purposes: to compare learning scenarios, to validate and extend metadata, and, more generally, to analyze settings within a development process.

As a next step, the empirical results will be used to improve our adaptation approach and to enhance the test method. We propose to use the method in cross-cultural settings in order to obtain comparable data. We will furthermore cooperate with different projects, developing a broad base of data and interpretations in this field, to achieve our main goal: to enhance and optimize the adaptation process and enable re-use of learning scenarios in a global setting.


References

[ChRo2004] Cherry, S., Robillard, P.: Communication Problems in Global Software Development: Spotlight on a New Field of Investigation, ICSE, 2004.

[Coll1999] Collis, B.: Designing for differences: Cultural issues in the design of WWW-based course-support sites, British Journal of Educational Technology, 30(3), 1999.

[DaMa2001] Dafoulas, G., Macaulay, L.: Investigating Cultural Differences in Virtual Software Teams, The Electronic Journal on Information Systems in Developing Countries (EJISDC), 7(4), 2001, pp. 1-14.

[Denm2003] Denman-Maier, E.: The Integration of Cultural Diversity into Knowledge Management and eLearning Systems, Proceedings of I-KNOW '03, Graz, Austria, July 2-4, 2003.

[Edmu2007] Edmundson, A.: The Cultural Adaptation Process (CAP) Model: Designing E-Learning for Another Culture. In: Edmundson, A. (Ed.): Globalized E-Learning, Cultural Challenges. Idea Group, U.S., 2007, pp. 267-290.

[GüGM2004] Gütl, C., Garcia-Barrios, V., Mödritscher, F.: Adaptation in E-Learning Environments through the Service-Based Framework and its Application for AdeLE. In: Richards, G. (Ed.): Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education. Chesapeake, VA: AACE, 2004, pp. 1891-1898.

[HaHa1990] Hall, E. T.; Hall, M. R.: Understanding cultural differences. Yarmouth, ME: Intercultural Press, 1990.

[Hend1996] Henderson, L.: Instructional design of interactive multimedia: A cultural critique. Educational Technology Research and Development, 44(4), 1996, pp. 85-104.

[HoHo2005] Hofstede, G.; Hofstede, G. J.: Cultures and Organizations. Intercultural Cooperation and Its Importance for Survival. Revised and expanded 2nd edition, McGraw-Hill Publishers, USA, 2005.

[Kame2006] Kamentz, E.: Adaptivität von hypermedialen Lernsystemen. Ein Vorgehensmodell für die Konzeption einer Benutzermodellierungskomponente unter Berücksichtigung kulturbedingter Benutzereigenschaften, Dissertation, Universität Hildesheim, 2006.

[Karo1998] Karolak, D. W.: Global Software Development: Managing Virtual Teams and Environments. IEEE Computer Society, Los Alamitos, USA, 1998.

[KeKR2002] Kersten, G. E., Kersten, M. A., Rakowski, W. M.: Software and Culture: Beyond Internationalization of the Interface, Journal of Global Information Management, 10(4), 2002, pp. 86-101.

[Kruc2004] Kruchten, P.: Analyzing Intercultural Factors Affecting Global Software Development – A Position Paper, ICSE, 2004.

[RiPa2007] Richter, T.; Pawlowski, J. M.: Adaptation of e-Learning Environments: Determining National Differences through Context Metadata. Proc. of KCTOS, Vienna, Dec. 2007.

[Seuf2001] Seufert, S.: Cultural Perspectives. In: Adelsberger, H. H.; Collis, B.; Pawlowski, J. M. (Eds.): Handbook of Information Technologies for Education and Training, Springer, Berlin et al., 2001.

[TrHa2006] Trompenaars, F.; Hampden-Turner, C.: Riding the waves of culture: Understanding cultural diversity in business. Nicholas Brealey Publishing; first published 1997, reprint 2006.