Written by: Ivan Shotlekov, Vanya Ivanova, Cor Koster

Introduction
The knowledge of English, or, for that matter, of any other foreign language, is not as widespread in Central and Eastern European countries as in the “old” EU countries, which puts companies and institutions there at a disadvantage in their dealings with foreign counterparts. Evidence for the limited knowledge of foreign languages in, for instance, Hungary is provided by Koster and Radnai (1997), who carried out a survey of foreign-language knowledge in southern Hungary. It revealed that many businessmen were keenly aware that the volume of their business may depend on a knowledge of foreign languages: 63.8% stated that they expected an increase in business with a better command of foreign languages. As many as 14.1% of the companies, especially SMEs (Small and Medium-sized Enterprises), conceded that they avoid foreign markets because of language problems. This is particularly intriguing, as it points to a very serious obstacle to internationalisation, which is so important for small and medium-sized companies in Central and Eastern Europe.
English, of course, is the language most frequently used in international business. There is abundant evidence that English is becoming a global language, without which doing business would hardly be possible (McArthur, 1996; Crystal, 1997; Graddol, 1997). But there is also evidence that in some countries and in some fields of business there is an increasing need for other languages, too (Koster and Radnai, 1997; Hagen, 1999; Huhta, 1999).
The fact that many SMEs experience “needs” or “shortcomings” in this area raises the question of what they do, or can do, about the matter. Few companies have a full-fledged “foreign-language policy” with a strategy for how to address the language issue in the short and medium term, let alone the long term. In most cases a perceived “need” leads to employees being sent to a language school, where they usually get “general English” or “business English”. Or they attend in-company classes, where they may even get a “made-to-measure” course. Sometimes the company pays for the foreign-language classes, in money, in time, or in both. Sometimes employees have to take care of their classes themselves. One interesting case is an example from Hungary, where in the early 1990s a Dutch bank took over a Hungarian one. The new Dutch managers noticed that the Hungarian staff knew virtually no foreign languages at all, which made them develop a simple but very effective “foreign-language policy”: all staff members were told that they had to become proficient in at least one foreign language within two years’ time, that the management was not paying for these classes, that the lessons had to be taken in the employees’ own time, and that if they did not manage to speak a foreign language at the end of the two-year period, they would be sacked. This approach turned out to be very motivating… (Koster and Radnai, 1997:34). An even more brutal way is simply firing people and hiring new staff who already have a certain proficiency in a foreign language. A much more humane way is having someone carry out a language audit.
A language audit is an investigation of the language needs of a particular company, resulting in a report outlining what action the company can undertake to increase the language competence of its employees, thereby increasing the possibility of contacts with foreign clients. It is mainly used for two purposes:
- to help a company develop a foreign-language policy,
- to collect data which enable a language school to develop a customised course for individual employees, or for specified groups of employees.
The latter is called for when a company wants a made-to-measure course for its employees; otherwise the language institute or the person who has to teach the course will not know what sort of material to use, what level to start at, and what skills to concentrate on for his/her specific students.
The former is much more general, covering the whole process from identifying the main problem areas in the use of foreign languages in the various departments to planning how to overcome possible shortcomings, for instance by requiring certain staff members to attend language courses or hiring language experts. Sometimes, a language audit fulfills both functions.
Language audits are at the centre of a Leonardo da Vinci programme of the European Community, running from November 2001 to April 2004, called “LATE” (Language Audits – Tools for Europe). The specific aims of the project are to:
- develop diagnostic tools for language audits, enabling enterprises, particularly SMEs and public authorities, to identify their communication needs and plan the necessary language training courses for their employees;
- develop ESP language teaching materials, on the basis of actual audits made within the framework of the project. The language materials are aimed at public authorities, especially in local government institutions, but also at SMEs involved in or interested in expanding business across borders, and will be developed in order to familiarise them with the kind of formal English that is used in “European documents” (country-specific or EU information material, rules and regulations – for instance, import and export requirements, tenders, applications for funding, etc).
The participants in the project are 16 organisations from seven countries: a multi-layer and multi-player mix of universities, teacher training colleges, SMEs and government organisations (at county, city and district level) in the Netherlands, Hungary, Bulgaria, Greece, Great Britain, Poland and Ireland. The co-ordinator is Taalcentrum-VU, the Free University Language Centre, Amsterdam, the Netherlands. Further information on the project can be found at http://rrbv.nl/LATE/
A tool for language auditing
The LATE project has developed a tool for carrying out a language audit, to be used by specially trained language auditors. It consists of a detailed questionnaire, a self-assessment and a vocabulary test. In this paper we look mainly at the relation between self-assessments and the test results, and discuss the findings of the try-outs in BG, HU, GR, NL and PL.
We will be very brief about the questionnaire. It was developed on the basis of suggestions made in the literature (for instance Reeves and Wright, 1996) and – especially – on the basis of the experiences at the Taalcentrum-VU, Amsterdam. This Free University Language Centre has been providing foreign language courses to SMEs since 1988. The aim of the questionnaire was to acquire as much pertinent information as possible, as efficiently as possible, so as to identify in detail the job-related tasks that individual staff members have to carry out in a foreign language.
Part of the questionnaire was a self-assessment. Before we describe how we asked respondents to assess their own proficiency, a few words on the validity of self-assessments are needed.
Research on self-assessment has had two main objectives. As mentioned by Finch (2001), the first aim includes ‘the investigation of possible ways of realising the goal of learner participation in matters of assessment and evaluation’. The second concerns ‘the investigation of the degree to which self-assessment instruments and procedures yield relevant and dependable results’. Even though the validity and reliability of self-assessment remains a moot field, there is evidence that learners can make satisfactorily accurate judgements of their own performance. As is pointed out by Coombe (2002), it is recognised now that learners are able to provide a meaningful contribution to the assessment of their performance and that this assessment can be valid.
In the part on self-assessment, our respondents had to indicate what they can do with the language, thus indicating what their “level” is. The “can-do” statements are based on and correspond to the levels described in the CEF (Common European Framework of Reference; http://coe.int/portfolio/documents/0521803136txt.pdf). Subjects were asked to assess themselves as to the four skills – reading, writing, speaking and listening – on the basis of statements such as:
| CEF | Reading – please tick the highest level |
| A1 | 1. I can understand some words and very simple sentences on familiar topics. |
| A2 | 2. I can understand very short, informative texts on topics which are of personal interest or which deal with everyday matters. |
| B1 | 3. I can understand articles in newspapers and magazines, in which more complex sentences and words are used. |
| B2 | 4. I can read specialist articles which relate to my field. |
| C1 | 5. I can read with ease virtually all forms of the written language, including specialist articles, even when they do not relate to my field. |
The responses were then compared with the results of the same subjects on a vocabulary test.
The vocabulary test consisted of 40 multiple-choice items, carefully selected in accordance with the lexicon assumed at the different levels (Silo/Taalunie, 1996:11-19), which describes a standard for the number of words one knows (the receptive lexicon) and the number of words one can actively use (the productive lexicon) in situations in which a foreign language is used. The receptive lexicon of people whose skills match level 1 should consist of 1,000 words; at level 2 it is 2,000 words, at level 3 4,000 words, at level 4 8,000 words, and at level 5 16,000 words or more. The size of one’s lexicon can give an indication of one’s proficiency in a foreign language.
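The doubling pattern behind these figures can be written as a simple formula: lexicon(level) = 1,000 × 2^(level−1). A minimal Python sketch of that relation (the function name is ours, purely for illustration):

```python
def receptive_lexicon(level: int) -> int:
    """Approximate receptive vocabulary size assumed at a given level (1-5).

    Level 1 corresponds to roughly 1,000 words; each subsequent level
    doubles the size, up to 16,000 words at level 5.
    """
    if not 1 <= level <= 5:
        raise ValueError("level must be between 1 and 5")
    return 1000 * 2 ** (level - 1)

print([receptive_lexicon(n) for n in range(1, 6)])  # [1000, 2000, 4000, 8000, 16000]
```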
One obvious question in this respect may be: why use a “simple” vocabulary test, instead of a more elaborate test in which all the skills are represented? The answer is simple. In all kinds of research, the correlations between vocabulary and general proficiency are quite high, usually in the region of .65. Moreover, such a test can be done rapidly and on a large scale, in writing. Finally, if one needs a test which is suitable for all levels, without making it too long, a vocabulary test is the quickest way to differentiate among people with different levels. Our vocabulary test only requires seven minutes. And, as everyone knows, in business “time is money”.
Of course, we are aware of the lack of face validity of such a vocabulary test. People who have to fill it in may ask: “How can you say anything about my speaking abilities, or about my listening competence, on the basis of this sort of test?”. In fact, one can, precisely because the correlation between the size of one’s vocabulary and overall language proficiency is at least as high as the correlations between subtests in any composite test that addresses all the skills separately.
The item list can be divided into five parts, which correspond with the five levels: items 1-8 account for level 1, items 9-16 for level 2, items 17-24 for level 3, items 25-32 for level 4 and items 33-40 for level 5. The items have been selected from a well-known word frequency list (Carroll, Davies and Richman, 1971) in such a way that items 1-8, which match level 1, come from the 1,000 highest-ranking words in the list, i.e. the words most frequently used in English. As the test progresses, the items rank progressively lower in the word frequency list.
Sample answer options from the vocabulary test:

☐ a three dimensional shape with six surfaces which are all the same size
☐ a shape consisting of a curved line, every part of which is the same distance from the centre of the area
☐ a shape with three straight sides and three angles
☐ a shape with four equal sides and four corners that are all right angles

☐ every now and then
☐ a severe or trying experience
☐ an agreement between two or more parties
☐ a verdict of a jury in court
☐ a statement made to the public and media
Model relation between self-assessment and proficiency level.
For comparing the scores on the vocabulary test with the self-assessments, we took the mean levels of the four skills as indicated by the respondents, rounded off to integers (“self-assessment”). We plotted these against the results of the vocabulary test (“test”). Ideally, anyone who regards himself or herself as being at, for instance, level 4 should also get a score which assigns him or her to level 4. Thus, all the scores “should” be in the diagonal yellow cells in the graph below (Fig. 1). If, for instance, someone thinks s/he is at level 4 but the test reveals that s/he is at level 2, s/he has overestimated his/her level. Conversely, if someone scores 37 items correctly, which puts him/her at level 5 in the test, and s/he thinks that the fitting level is 3, s/he has underestimated his/her level.
Fig. 1 Model: a perfect match between “self-assessment” and “test” should lead to all “matches” being in one of the yellow cells
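The diagonal-match logic described above can be sketched in a few lines of Python; the function name and labels are ours, and both arguments are assumed to be on the 1-5 scale:

```python
def classify_estimate(self_assessment: int, test_level: int) -> str:
    """Compare a respondent's self-assessed level with the test-derived level.

    A respondent on the diagonal (self-assessment equals test level) is a
    match; above the diagonal is an overestimate, below it an underestimate.
    """
    if self_assessment == test_level:
        return "match"
    return "overestimate" if self_assessment > test_level else "underestimate"

# The example from the text: self-assessed level 4, test level 2
print(classify_estimate(4, 2))  # overestimate
```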
As to the reliability of the test, Cronbach’s alpha indicated that it was very high (α = 0.90). Almost all items had a positive item-total correlation; 83% had an item-total correlation of .20 or higher.
Over all subjects (N = 119, in five countries: the Netherlands, Hungary, Bulgaria, Greece and Poland), there was a positive correlation between the scores on the vocabulary test and the self-assessments (r = .61, p < .001; explained variance 37.2%). However, subjects in the various countries differed in the way they assessed their own proficiency.
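The two figures are directly related: explained variance is the square of the correlation coefficient (.61² ≈ 37.2%). A minimal sketch of the computation, using only the Python standard library; the data below are invented for illustration and are not the project’s 119 subjects:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Illustrative data only: vocabulary-test scores vs. self-assessed levels
test_scores = [12, 18, 25, 31, 38, 9, 22, 29]
self_levels = [1, 2, 3, 4, 5, 1, 2, 4]
r = pearson_r(test_scores, self_levels)
print(round(r, 2), f"explained variance: {r * r:.1%}")
```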
Fig. 2 presents the percentages of correct assessments. It shows that Bulgarians (46% correct) were clearly better than Hungarians (15% correct).
What is more interesting, especially from an intercultural point of view, is the degree to which the various nationalities underestimated or overestimated their own level. There turned out to be a significant difference: χ² = 14.87, df = 4, p < .05. See Fig. 3.
As Fig. 3 shows, especially the Hungarians (70%) tended to underestimate their level. In the case of the Greeks, the number of people who overestimated their own proficiency was about equal to the number of subjects who underestimated their level (about 30%).
In this paper, we introduced some aspects of the tool that has been developed in the Leonardo da Vinci project “LATE” (Language Audits – Tools for Europe). Looking specifically at the relation between self-assessment and proficiency level, we have seen that the usability of this tool may differ from country to country: people in different countries seem to have different perceptions of their own proficiency, with Hungarians and Dutchmen tending to underestimate their abilities more than people in other countries.
Try-outs in five countries have shown that our approach – combining self-assessment and a vocabulary test – is useful, but we must say that, at its present stage of development, it should be used with caution. This is partly because, although a person’s vocabulary size is a very good indicator of his/her proficiency, more advanced – though not necessarily more reliable – methods of establishing a person’s proficiency are becoming available. A very interesting development, for instance, is offered by computer-assisted tests, some of which can be taken on the Internet. One of these is the test developed by DIALANG, a testing system “available [on the Internet] to language schools and other institutions which need to carry out placement tests or to gain a quick indication of individuals’ levels of ability in any of six European languages”. Similar facilities are offered by COMMUNICAT and BULATS, with the latter focussing on “language relevant to the work place”. Yet, our simple 40-item vocabulary test is valuable: it can be administered quickly (in seven minutes) and scored in a short time; it is very reliable; and it can be done by people without access to the Internet.
As to the self-assessments, people may often be right in estimating their level, but in many cases they either overestimate or underestimate their level. A personal conversation in the foreign language between the respondent and the language auditor usually reveals in less than a minute to an experienced teacher/auditor how realistic a self-assessment is.
At this stage of the LATE project, it seems that the tool developed (an audit consisting of a questionnaire, a self-assessment and a vocabulary test) presents enough information to a qualified language auditor to enable him/her to develop a relevant training programme.
Of course, in this paper we have not been able to discuss questions such as:
- How do we train people to become “qualified language auditors”?
- How do we translate the outcome of such an audit into actual material and teaching?
As to the first question: within the framework of the LATE project, courses are already being given to people who want to become language auditors, in four countries: HU, GR, BG and PL. The audit course in Sofia, for instance, was attended by 26 people, 17 of whom duly handed in their assignment papers, i.e. audits. The result was 10 audits; 12 authors were awarded a Certificate of Attainment, and 6 others received a Certificate of Attendance.
As to the second question – the translation of the findings of such an audit into actual teaching material, and the related question of how to teach the people involved – this is a topic we are still working on.
So far, little has been published on language auditing: only one book on the technique of carrying out language audits (Reeves and Wright, 1996) and some articles reporting on foreign-language needs in business (Koster and Radnai, 1997; Schopper-Grabe and Weiss, 1998; Hagen, 1999; Huhta, 1999; Weber, Becker and Laue, 2000). Nothing at all has been written on the process of translating the results of a needs analysis into specific teaching materials or teaching processes.
Because of the multiple-choice character of the test, a score of 10 could be achieved by pure guessing. Hence, although the 40-item test contained 5 groups of words with 8 items each from various frequency bands, all subjects with a score of 0-16 were assumed to have only level 1.
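One possible reading of this scoring rule can be sketched as follows. This is our own reconstruction, not the project’s published scoring key: each block of eight items is taken to correspond to one level, and the guessing correction floors all scores of 16 or below to level 1 (note that under this reading a test result of level 2 cannot occur, which the actual cut-offs used in the project may have handled differently):

```python
def score_to_level(score: int) -> int:
    """Map a 0-40 vocabulary-test score to a proficiency level (1-5).

    Each block of eight items corresponds to one level, but because a
    score of about 10 can be reached by pure guessing on the
    multiple-choice items, every score of 16 or below counts as level 1.
    """
    if not 0 <= score <= 40:
        raise ValueError("score must be between 0 and 40")
    if score <= 16:  # guessing correction: 0-16 counts as level 1
        return 1
    return min(5, (score + 7) // 8)  # 17-24 -> 3, 25-32 -> 4, 33-40 -> 5

print(score_to_level(37))  # 5, matching the level-5 example in the text
```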