
“If we’re lucky, we recognise potential.” A study of admission criteria and entrance screening practices in public service interpreter training

Pages 95-113 | Received 16 Jun 2022, Accepted 30 Oct 2023, Published online: 09 Nov 2023

ABSTRACT

The growing demand for language mediation across different domains of public service interpreting (PSI) poses a challenge for policymakers, stakeholders (institutional representatives, clients), and traditional interpreter education institutions. Alongside university-based interpreter education, different training formats have emerged internationally to meet the increased need for training. Building on a systematic review of extra-university training formats in Austria, this contribution presents the results of a qualitative follow-up study drawing on data from semi-structured interviews with providers. The aim was to investigate whether course providers employ entrance assessment procedures when selecting course participants and, if so, which ones are used and what motives lie behind providers’ decisions for particular admission procedures. Our study shows that while there is considerable commitment to offering customer-tailored courses to heterogeneous groups of trainees, training formats differ and there seems to be little communication among providers regarding the establishment and adequacy of different formats for student selection.

1. Introduction

Screening and assessment practices, both entrance screening and formative and summative assessment, have been a prominent topic in translation studies (TS) and interpreting studies (IS). Assessment refers to the appraisal of an individual’s achievement (Amato and Mack 2022, 464), as opposed to the ‘evaluation’ of the quality and outcome of a specific programme: it relates to assessing prospective students’ entrance qualifications (entrance-level assessment), students’ progress during training (formative assessment) and students’ competencies after training completion, and to the awarding of a degree or certification (summative or final assessment). Sawyer (2004, 72) differentiates between entry-level assessment for ‘novice’ learners, ‘intermediate assessment’ for ‘advanced beginners’ and ‘final assessment’ for ‘competent’ learners.

The design of assessment at different levels has been regarded as an important element of translator training and can be considered equally important for interpreter education (Hale 2007, 173–176; Corsellis 2008, 60–65). For both conference and public service interpreter (PSI) training, some form of assessment will be required, irrespective of the concrete training format, even if the PSI training landscape is more multifaceted and PSI training is still being offered in the form of non-degree ‘skills-based courses’ or within language programmes (Mellinger 2021, 171–172): ‘Whereas more formal interpreting programmes can more easily screen potential students for interpreting aptitude, PSI courses embedded in language programmes or as non-degree-granting programmes are somewhat limited in their ability to homogenise their student population’ (Mellinger 2021, 179). While conference interpreter training mostly takes place at higher education institutions, often with a long-standing tradition in interpreter education, at MA level and with similar formats and quality criteria, PSI training is more varied and is also offered by non-traditional providers. Training formats have been shaped by the complex market conditions that govern PSI in many countries worldwide: little regulation, no standardised translation policies, lack of awareness, great heterogeneity among users and service providers, and capital that is low in symbolic and often also in economic terms. In many countries, and Austria serves as an example here, the need for training still goes unrecognised. Training may not be viewed as necessary by some interpreters, as they are given assignments irrespective of whether they have completed training, either because there are no national standards for interpreting or because there is no money to finance training or even pay interpreters (also see Iannone and Redl 2017, 125, and Pöllabauer 2020, 37). Even if training can be seen as the linchpin of adequate service provision, Hale’s observation that training remains ‘one of the most complicated and problematic aspects of Community Interpreting’ (Hale 2007, 162) still holds true for many countries. The array of training options continues to be diverse (Bancroft 2015), and training-related challenges can be summarised as follows: aside from the above-mentioned (a) lack of recognition of the need for training, they include (b) a lack of compulsory pre-service training for practitioners, (c) a lack of adequate programmes and (d) differences in the quality of training measures (Hale 2007, 163). In addition, differentiated instruction is needed for heterogeneous classrooms with diverse student populations (Mellinger 2021, 176–179), where greater attention should be paid to ‘student readiness’ and different learning profiles. Moreover, while assessment has been identified as an important element of training, assessment formats seem to be as diverse as PSI training itself. This assumption was supported by the results of a systematic study of PSI programmes with a focus on Austria (Pöllabauer 2020), which suggests that even though training providers show considerable commitment to offering customer-tailored courses to heterogeneous groups of students, course formats are diverse and there is little communication among providers regarding assessment procedures.
Austria is an example of a country without a comprehensive national framework for the employment and training of public service interpreters, or for national-level examination or accreditation procedures. While court interpreting is regulated through the Court Interpreters Act, which requires interpreters to pass a court interpreter exam to serve as sworn and certified court interpreters, and the practices for employing interpreters in asylum or police procedures are currently being reformed (the Ministry of the Interior has recently established its own system for examining and accrediting asylum and police interpreters), other fields of spoken-language interpreting have no established mechanisms of quality control or accreditation (Kadrić 2019). Sign language (SL) interpreting shows the highest degree of professionalisation: SL interpreters are required to pass a vocational test to become members of the Austrian Association of Sign Language Interpreters and Translators. There is, however, no comprehensive nationwide system of accreditation such as, for instance, the UK Diploma in Public Service Interpreting (DPSI), and no specific professional association for PSI. Existing professional associations have been involved in vocational exams (the court interpreter exam, the SL vocational exam), but are generally not involved in interpreter training (except for the Association of Sign Language Interpreters and Translators, which has been involved in interpreter training initiatives). Professional associations have also been slow to focus on matters of PSI and have not played a major role in bringing together stakeholders in this field (see the contributions on Austria in Pöllabauer and Kadrić 2021; for SL interpreting, see Grbić 2023).

These developments suggest that any kind of assessment established by course providers, whether at entrance, formative, or end-of-course level, will be established individually in-house and based on providers’ particular needs and educational goals, without having to comply with national testing and accreditation standards. This prompted us to take a closer look at course providers’ and educators’ standpoints on assessment through a qualitative interview study and to focus on the procedures and criteria applied in different courses in our national context, some of which can be seen as ‘interpreter training’, with a mostly practice-oriented focus, and others, more broadly, as ‘interpreter education’ (Angelelli 2017, 32), embracing an academic approach and research-based elements.

2. Assessment and admission testing in interpreting studies

In this literature review, we focus predominantly on IS, sketching lines of thought and findings on interpreter training.

2.1. Assessment in conference interpreting

The growing demand for international communication after WW2 shaped the establishment of conference interpreting as a profession and increased the demand for interpreter education, leading to the emergence of a considerable number of training programmes over the last few decades. In particular, the efforts of the AIIC Training Committee and the European Master in Conference Interpreting (EMCI) in defining quality criteria have had a lasting impact on training and have spurred research on screening and selection procedures (Russo and Salvador 2004, 410; van Dam and Gentile 2021). Conference interpreter education is mostly university-based, for languages considered to be in high demand, and offered predominantly at postgraduate level, with an undergraduate degree being a requirement for taking aptitude tests (Russo 2011, 12; Setton and Dawrant 2016, 106). Admission testing through different screening procedures has become an integral part of many conference interpreter training programmes and can be regarded as a gatekeeping mechanism serving two main purposes: one is the practical necessity of controlling student numbers, ensuring that the number of participants does not exceed the human and financial resources of the training institutions (Russo 2011, 6); the other is the wish to distinguish between candidates with a high potential of succeeding as conference interpreters, that is to say candidates possessing the necessary ‘aptitude’, or ‘interpreter-readiness’, and ‘teachability’ (Russo 2011, 7), and those who are less likely to succeed. Ultimately, the aim is to create homogeneous groups of trainees to ensure smooth progression throughout the course, which presupposes that entrance exams are eliminatory in nature (Moser-Mercer 1994, 58). Though the gatekeeping function of aptitude tests has been broadly recognised, views seem to diverge on what exactly constitutes an objective, reliable and valid aptitude test (Amato and Mack 2022, 464; Setton and Dawrant 2016, 133). Despite much progress, it remains challenging to determine which screening procedure best fits which purpose and training format and, consequently, how to measure every component of the overall skill set and qualities a trainee should, in theory, possess (Kalina 2000, 13; Timarová and Ungoed-Thomas 2008, 30). These questions remain: what exactly should be tested, how admission tests should be administered and how effective the design of the tests proves to be (Pöchhacker and Liu 2014, 2; Russo 2011, 10). Though the importance of cognitive skills and the question of the predictive validity of certain test types are still very much at the forefront of scholarly inquiry, affective variables such as personality traits and motivation have also received increasing attention as complementary components to psychometric tests, interpreting-related tasks and written tests aimed at determining, for instance, memory capacity, language level, general knowledge and communicative skills (Moser-Mercer 1994; Schweda Nicholson 2005; Setton and Dawrant 2016, 106). Personality traits and motivational variables, however, are difficult to assess formally (Timarová and Ungoed-Thomas 2009, 227; Russo 2022, 309).

2.2. Assessment in public service interpreting

In PSI, candidate selection also serves to identify applicants whose background and skill levels presumably make them most likely to match the training initiatives’ goals (Roda 2000, 105); nevertheless, entrance testing in PSI training has received less scholarly attention (Hlavac, Orlando, and Tobias 2012, 23). Based on experiences in UK training initiatives, Corsellis (2008, 68) lists four main criteria that candidates should display when entering initial professional training: language competence as well as interpreting, learning, and professional potential. To screen for these criteria, she suggests a selection process consisting of several individual tasks: a self-assessment form for language skills and a formal selection procedure to test aptitude, featuring role-plays, sight translation and written translation, text production exercises and interviews in both languages.

When looking at case studies on PSI training initiatives, it becomes apparent that they touch in part upon these requirements when selecting candidates, for instance, the need for a certain level of proficiency in the host society’s language (Bergunde and Pöllabauer 2019; Hale and Ozolins 2014) or a minimum level of education (Delgado Luchner 2019). Only a few studies have discussed test constructs in more depth, however (van Deemter, Maxwell-Hyslop, and Townsley 2014).

Since linguistic proficiency in a minimum of two languages seems to be a widely acknowledged core skill for interpreting, the testing of novice interpreters often envisages the evaluation of both languages involved (Wadensjö and Skaaden 2014). Skaaden (2013, 38–39) discusses the selection procedure of a one-year training programme in Norway, which selects candidates upon successful completion of an exercise in which utterances have to be repeated in the respective working languages and which tests candidates’ skills in both directions. In the context of a mixed-language cohort training course in Australia, Lai and Mulayim (2010) report on an assessment format containing reading comprehension exercises, written essay assignments and an interview eliciting linguistic proficiency in both languages. Skaaden and Wattne (2009) outline the testing format of an online interpreter course in Norway where applicants were tested for lexical knowledge and were also assessed on their performance in a consecutive interpreting simulation to elicit oral skills. In her report on a screening tool for assessing interpreter readiness, Angelelli (2007, 64–65) lists listening, processing, and speech production skills as abilities to be assessed among candidates and stresses the importance of assessing interpreting and linguistic abilities separately. The complexity underlying interpreting, in particular, foregrounds the evaluation of additional skill sets, which are often tested separately and result in a multi-component test construct. Gustafsson, Norström, and Fioretos (2012) describe the entrance assessment format for a state-based basic interpreter training programme in Sweden, which includes components assessing applicants’ legal and societal knowledge of Sweden and their translation skills from Swedish into the other working language, as well as interviews in the respective languages, used for testing linguistic proficiency and aptitude. Hlavac, Orlando, and Tobias (2012) report on an intake test for a short interpreter training course in Australia, comprising over 30 questions and screening applicants for their general interest in interpreting and motivation to undertake training as well as for their linguistic proficiency. The test sought to evaluate candidates according to four macro-criteria (listening, reading, writing, speaking) plus note-taking, by eliciting information about the applicants’ professional experience, educational backgrounds, motivation for undertaking training, language proficiency in English and their self-ascribed level in languages other than English, terminology and knowledge about the interpreting profession. Additionally, the test format included reading, writing, listening and note-taking activities; candidates also had to complete a translation exercise and were assessed on their pragmatic and communicative skills.

Generally, research on candidate selection in PSI training reflects a strong focus on testing applicants’ bilingual abilities to determine language proficiency. In some cases, programme providers employ performance testing (Angelelli 2007; Skaaden 2013; Skaaden and Wattne 2009), while in others a questionnaire eliciting information about candidates’ language biography and self-ascription serves as the basis for evaluation. Overall, and this might be due to providers’ individual focus in training interpreters, traditional translation activities and sight translation rarely feature among the tasks set for test-takers, presumably due to their limited value for assessing the oral and aural skills of potential trainees (Skaaden 2016, 9).

3. Research questions and methodology

Our principal research question was whether entrance criteria and screening procedures are in use in programmes that have been offered in our national context and if so, what kinds. This broad focus was broken down into the following three sub-questions:

  • What are training providers’ motives for establishing or dispensing with entrance screening procedures for the programmes they offer?

  • Which specific elements and components do these criteria include?

  • How is the suitability of these criteria determined?

To address these questions, we applied a qualitative research design and combined a content analysis of publicly available digital and print course curricula and course descriptions with in-depth interviews; for this contribution, we will only report on the interview data (Note 1).

To determine our ‘sample universe’ (Robinson 2014, 26), we defined the following inclusion criteria: a) spoken-language PSI programmes, generic and language-specific, that had been offered in Austria at least once in the previous decade (2011–2021), with a broad or a domain-specific (asylum, healthcare, police) focus, and b) for which publicly accessible information was available. We excluded mainstream university programmes, as these employ more standardised admission criteria. Sign language programmes were also excluded, as were courses with a focus on court interpreting, which is more extensively regulated, at least in theory.
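To illustrate how these inclusion and exclusion criteria operate as filters over a course inventory, the following minimal sketch applies them to hypothetical course records; the field names and example data are our own illustration, not the structure of the actual database at https://dialogdolmetschendatenbank.at:

```python
# Hypothetical course records, purely for illustration. Fields mirror
# the stated criteria: spoken-language PSI, offered at least once in
# 2011-2021, publicly documented, non-degree format, not court-focused.
courses = [
    {"name": "course_a", "modality": "spoken", "last_offered": 2018,
     "public_info": True, "degree": False, "focus": "healthcare"},
    {"name": "course_b", "modality": "sign", "last_offered": 2019,
     "public_info": True, "degree": False, "focus": "broad"},
    {"name": "course_c", "modality": "spoken", "last_offered": 2009,
     "public_info": True, "degree": False, "focus": "asylum"},
    {"name": "course_d", "modality": "spoken", "last_offered": 2020,
     "public_info": True, "degree": False, "focus": "court"},
]

def included(course: dict) -> bool:
    """Apply inclusion criteria a) and b) plus the stated exclusions."""
    return (course["modality"] == "spoken"            # no SL programmes
            and 2011 <= course["last_offered"] <= 2021
            and course["public_info"]                 # criterion b)
            and not course["degree"]                  # no mainstream degrees
            and course["focus"] != "court")           # court intp. excluded

print([c["name"] for c in courses if included(c)])    # ['course_a']
```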

Our initial selection of course providers was based on a systematic analysis of PSI courses in Austria (Pöllabauer 2020). Nine training providers met the above inclusion criteria (listed below in alphabetical order; for details about the providers, see https://dialogdolmetschendatenbank.at) (Note 2).

  1. Dolmetschen für Gerichte und Behörden – Grundlehrgang (W-ULG)

  2. Dolmetschen für Gerichte und Behörden – Master-Upgrade (W-ULG (MA))

  3. Kommunaldolmetschen – Fortbildungsreihe für Dolmetscher*innen im Sozial- und Gesundheitsbereich (W-DIAK)

  4. Lehrgang Dolmetschen im Kommunalbereich für Laiendolmetscherinnen und Laiendolmetscher (R-BZ)

  5. Plus.Mehrsprachigkeit, LaiendolmetscherInnenlehrgang (L-Plus)

  6. Qualitätsvolles Dolmetschen im Asylverfahren (QUADA) (Note 3)

  7. Universitätskurs Community Interpreting (I-UK)

  8. Universitätskurs Kommunaldolmetschen Aufbaukurs (G-UK2)

  9. Universitätskurs Kommunaldolmetschen Basiskurs (G-UK1)

Of these nine providers, two (University of Graz and University of Vienna) offer a basic training programme (G-UK1 and W-ULG, respectively) that, after successful completion, permits students to continue their training in an advanced-level course (G-UK2 and W-ULG (MA), respectively). These programmes were not analysed separately, as completion of the basic course is a prerequisite for admission to the continuation programmes.

The in-depth interviews were designed as semi-structured interviews aimed at both collecting facts and exploring the interviewees’ subjective perspectives. The interview guide focused on both curriculum design and entrance criteria. In what follows, we will only concentrate on entrance screening procedures.

We employed expert sampling as a form of purposive sampling. Interviewees were contacted via email through gatekeepers. For each provider, we interviewed at least one individual involved in training design and, except for one institution (L-Plus), at least one person involved in candidate selection. In total, we conducted nine digital in-depth interviews with 14 interviewees between March and May 2021. Five of the nine interviews were three-party group interviews, whereas the remaining four were one-on-one interviews. In two interviews (W-ULG and W-ULG (MA), and G-UK1 and G-UK2), the interviewees managed both the basic and the advanced courses offered by these providers and thus provided information on both courses in a single interview. Our total corpus comprises approximately 460 minutes of raw interview data, with individual interviews ranging from 18 minutes to 1.5 hours. All authors were involved in the interviewing process. The interview guide was jointly drafted and extensively discussed to ensure consistency in interviewing.

All interviews were recorded and transcribed with f4 in a ‘simple transcription’ format, using the transcription conventions suggested by Dresing, Pehl, and Schmieder (2015, 27–30). For theory-based and data-driven coding (Kuckartz 2018), we used MAXQDA. Two team members served as main coders; in the first coding phase, they jointly coded one 30-minute interview to improve transparency and coding consistency. In the second phase, they separately coded 215 minutes each. As a quality assurance measure, each coder revised the transcripts coded by the other. As a last step, two other team members counter-checked the coded transcripts and selected the relevant sections for this contribution.
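The study relied on joint coding and mutual revision rather than a formal reliability coefficient. Purely as an illustrative sketch, assuming both coders’ labels for the same segments were exported from MAXQDA as parallel lists (the codes below are hypothetical), a simple percent-agreement check could look like this:

```python
# Illustrative sketch only: the study used joint coding and mutual
# revision, not a computed reliability statistic. Hypothetical data.
from collections import Counter

coder_a = ["entrance_criteria", "motivation", "soft_skills", "empowerment"]
coder_b = ["entrance_criteria", "motivation", "personality", "empowerment"]

def percent_agreement(a: list[str], b: list[str]) -> float:
    """Share of segments to which both coders assigned the same code."""
    if len(a) != len(b):
        raise ValueError("Coders must have labelled the same segments")
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 75%
# Codes assigned by coder B wherever the two coders disagreed:
print(Counter(y for x, y in zip(coder_a, coder_b) if x != y))
```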

The interview excerpts presented below are a selection of representative examples, informed by the results of our content analysis (see Note 1), with a focus on admission processes and providers’ motives for establishing these.

4. Results

Of the nine courses, two university-affiliated providers offer both a basic and an advanced training programme. Upon successful completion of the basic training (G-UK1 and W-ULG), candidates can move on to the advanced training programme (G-UK2 and W-ULG (MA)) without a further screening process (see Section 3). For the remaining seven programmes, a broad spectrum of approaches to candidate selection was identified. These can be divided into two groups, one applying mainly a pre-set range of entrance criteria (I-UK, L-Plus, QUADA, R-BZ, W-Diak) and the other introducing additional test elements for candidate selection (G-UK1, W-ULG). In the first group, three of the five providers (L-Plus, R-BZ, W-Diak) additionally assess applicants’ suitability when deemed necessary, for instance through short meetings to determine language skills; I-UK usually invites prospective participants to an interview which includes a self-presentation of up to one hour. Even though these three providers rely predominantly on pre-established entrance criteria, they argue that they would rather admit a candidate into training, even one not deemed entirely suitable, than risk course cancellation. Since they consider course completion to be beneficial for their participants’ personal and professional development, these providers focus instead on the strong suits of their training formats, highlighting their courses’ potential for empowering participants. One interview partner stresses that ‘basic certification’ can be crucial for graduates’ professional development:

One of the main tasks we have in the sense of empowerment is to support people so that they can better master their lives and activities. And in this respect, in my ideal view of empowerment, this course has actually been one of the guarantors of success for us over the last few years. (R-BZ_2, 254–258) (Note 4)

As a migration or refugee background often entails a disadvantaged societal position, another interview partner emphasises that such courses may heighten participants’ awareness of the skills they still lack, and that interpreting might be a potential career path for them, albeit one requiring an increased degree of professionalism.

Well, and I think it’s important to understand that interpreting really is a potential profession. […] But nevertheless, I think it is important for people who have this quite natural multilingualism, which they are simply equipped with through life, that they also understand how professionally this activity can be carried out, and that one also looks at this as a future prospect. That they also understand that there is a connection between conference interpreting, which is simply really/where the performance standards are TREMENDOUS, and community interpreting. That this is ultimately the same activity again. So this transfer over and over again. Always this quick grasping, comprehending and making understandable. And I hope that this somehow also has a motivating effect. (W-Diak, 594–604)

In contrast to this rather inclusive approach, which underlines empowerment and the strengthening of potential, the second group (G-UK1, W-ULG) sits at the other end of the spectrum, operating more standardised selection procedures that admit only the most suitable candidates into their training programmes. One of the interviewees once again stresses the importance of professionalisation, networking and an increased awareness of the specific challenges faced by their course alumni:

[…] they have really developed a kind of awareness of their professional status and of their profession. This went so far that they thought about founding a professional association and they had already made relatively extensive enquiries. And there were also groups, and I think they were in contact for a very long time/whether they are still in contact now/but they were also in very close contact for a very long time. So there has been a lot of awareness-raising in this professional field of CI. And vice versa, of course, the market has also changed through trained or at least qualified interpreters. They simply worked differently and that was also the purpose of this qualification measure, I think. (G-UK1, 1249–1261)

Interviewees of these university-affiliated providers critically reflect on the procedures they have established and emphasise their drawbacks: for instance, the high logistical complexity and time-consuming nature, the strain on human resources, the difficulty of finding suitable examiners, particularly for non-traditional languages, where few(er) examiners with experience in examining and assessment are available and new examiners require extensive briefing, or the complexity of assessing social skills or the capacity for empathy. One reason for this more critical stance might be that these interviewees are affiliated with academic institutions and have a strong TS background, which might indicate a greater motivation to uphold academic and institution-related quality criteria.

4.1. Entrance criteria

In spite of the overall heterogeneity of candidate selection, all providers share common ground when it comes to establishing minimum requirements, that is to say, language skills and professional experience. In what follows, we describe the application procedures and minimum entrance criteria that are in place.

4.1.1. Criteria for application procedures

Standard application documents comprise CVs, letters of motivation, student records, references, and proof of identity. Other credentials are usually not requested, although university-associated training formats like W-ULG tend to demand higher standards for official translations of documents. One provider (QUADA) offers applicants the option to fill in a digital, self-administered self-assessment form to determine whether the chosen course meets their learning goals. No feedback is provided, suggesting that this questionnaire may not be equally helpful for all, particularly not for applicants with little training experience.

4.1.2. Language skills

All courses, except the two follow-up courses, have established specific minimum language requirements: two (L-Plus, R-BZ) require only basic to intermediate (B1) skills in German; four of the seven (G-UK1, I-UK, QUADA, W-Diak) require applicants to provide proof of German at B2 level. Only one provider (W-ULG) explicitly demands proof of C1 proficiency in both German and the other working language(s). For all courses, the main language of instruction is German. Courses which offer language-specific training (W-ULG, W-ULG (MA)) also have units in which the trainees’ other working languages are used and practised; non-language-specific courses may also group trainees according to their linguistic backgrounds in specific course units and attempt to include language-specific exercises. Whether a course offers language-specific training will influence the format of intake tests.
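To make the spread of requirements easier to compare, the following minimal sketch encodes the minimum German levels reported above as data; the ordering helper is our own illustration, not part of any provider’s procedure:

```python
# Minimum German proficiency at entrance, as reported in this section
# (the follow-up courses G-UK2 and W-ULG (MA) have no separate intake).
CEFR_ORDER = ["A1", "A2", "B1", "B2", "C1", "C2"]

MIN_GERMAN = {
    "L-Plus": "B1",   # basic/intermediate skills suffice
    "R-BZ": "B1",
    "G-UK1": "B2",
    "I-UK": "B2",
    "QUADA": "B2",
    "W-Diak": "B2",
    "W-ULG": "C1",    # C1 also required in the other working language(s)
}

def at_least(level: str, threshold: str) -> bool:
    """True if `level` meets or exceeds `threshold` on the CEFR scale."""
    return CEFR_ORDER.index(level) >= CEFR_ORDER.index(threshold)

# Providers demanding at least B2 German:
print([p for p, lvl in MIN_GERMAN.items() if at_least(lvl, "B2")])
# ['G-UK1', 'I-UK', 'QUADA', 'W-Diak', 'W-ULG']
```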

4.1.3. Professional experience

Professional experience as an entrance criterion has received less attention in the literature. In our sample, however, several providers (G-UK, QUADA, W-Diak, W-ULG) mention prior and ongoing professional experience as a relevant entrance criterion, ideally even in a specific domain (for instance, asylum proceedings in the case of QUADA). W-ULG accepts candidates who do not fulfil the minimum academic requirements if they can prove extensive professional experience as interpreters.

4.2. Meeting suitability criteria: transversal aspects of candidate selection

Most providers hold specific expectations concerning applicants’ soft skills, personality and intrinsic motivation. Acknowledging that it may be difficult to differentiate between concepts such as ‘soft skills’ and ‘personality traits’, we nonetheless try to distinguish between them for analytical purposes. For this study, we defined soft skills as desirable personal characteristics, abilities or competences (empathy, resilience, communication skills or stress management), in contrast to the necessary hard skills, that is, technical and methodological competences such as language or interpreting skills. The interview questions that focused on applicants’ personalities targeted the interviewees’ notions of ‘ideal’ candidates to elicit their views on both soft skills and personality traits.

4.2.1. Soft skills

When elaborating on soft skills, interviewees mentioned conflict prevention, cultural mediation, (self-)reflectivity, the ability to set boundaries and to manage one’s learning progress autonomously, and communication management skills as valuable assets. A recurring topic was trainees’ awareness of the multiple roles interpreters may have to assume, as reflected in the following quote:

Everything that has to do with social competence in the broadest sense, because I think the interpreter has also the role of a moderator and that was important to me: can the person also set priorities in the interaction? Can he or she also take a step back and give advice, or can he or she say: ‘Listen, be careful!’? […] So for me, the interpreter also has a moderator function; it has a lot in common with midwifery, that is, to bring things to light and unveil them. But this is often very difficult to achieve in a first interview. (R-BZ, 113–121)

Most providers acknowledge, however, that soft skills are too complex and wide-ranging to be assessed within the range of available entrance criteria or entrance test procedures; some providers therefore refrain from assessing them at all. If soft skills are considered, they are judged through informal personal or telephone interviews (I-UK, R-BZ) or through oral entrance examinations (for details see below) and group discussions (W-ULG, G-UK). None of the providers in our corpus used a professional, established form of personality assessment.

4.2.2. Personality

The significance of role awareness was also a key issue when interviewees were asked about candidates’ personality traits. Other characteristics mentioned were resilience, flexibility, ethical decision-making, the willingness to learn, capability to cope with pressure, a sense of responsibility, communication and interaction skills and confidence in one’s own abilities. One interview partner describes the ‘ideal candidate’ as follows:

I think the ideal type is someone who shows a great willingness to learn and flexibility. So I guess ideally you should know and understand that there’s no such thing as a recipe and that you can also adapt your knowledge to new situations and readjust it. Above all, this person would be someone who has developed their own ethics. In other words, someone who knows when to say something, when to intervene and say: ‘Hold on a second, due to […] human rights concerns, something still needs to be interpreted!’ So someone who can also draw attention to the clients’ language when necessary. […] But also knows when it’s time to hold back. So to exercise restraint is also, for example, when a client WANTS more, that is, more advocacy or more support, how to be able to hand it over in an elegant way, saying: ‘I’m not here as an advocate’. (W-Diak, 782–792)

Whereas some providers expect future candidates to already possess these traits so they can hone them in training, one interview partner (L-Plus) surprisingly stresses the advantages of a lack thereof, referring to personalities displaying a strong will and drive to help others through interpreting. In this interviewee’s view, such individuals might benefit most from reflecting on their roles and responsibilities:

[…] What I have often come across are people with a tremendous will to help others, probably in the sense of a ‘helper syndrome’. I don’t know whether I find that ideal. In the work context, it is certainly not ideal; in the training setting, it is perfect. Precisely in those situations, the penny has dropped, in the sense of: ‘well, that’s not my job at all!’ That occurs much faster and much easier than with someone who perhaps has a more differentiated approach and clearer role conceptions. The ideal candidates for the training course are those who claim ‘I want to help as many people as possible and save the world’. Here you can tell the greatest difference and the learning effect is also the most significant. (L-Plus, 267–275)

The reasons for applying and engaging in training referred to in this quote also resurface in the need for intrinsic motivation, which some interviewees stress as another prerequisite for admission to training.

4.2.3. Intrinsic motivation

Several providers emphasise applicants’ willingness to participate actively in the training, not only as a prerequisite for successful completion but also as a decisive factor in the screening process. Although applicants’ motivation did not seem to be the interviewees’ main priority, it clearly played a major role for providers, to the extent that it served as a decisive indicator in cases where applicants did not meet, or only barely met, the formal requirements:

I have two particular candidates in mind. Both, in fact, did not meet the standard requirements. Younger people and without a university degree. One was a man, Iranian, I think. He was an interpreter. That was his dream job. He actually had interpreting experience and could prove the required period of four years, but just narrowly. But in the course and in training he was always one of the best. And he had already passed the court interpreter exam with flying colours. I actually heard that he was one of the best candidates in recent years. It is not only relevant what applicants already have and can prove on paper. If we’re lucky, we recognise potential. (W-ULG, 956–970)

This quote strongly underlines the motivational aspects governing a candidate’s application in the selection process, while foregrounding the provider’s emphasis on perceived abilities and their potential for successful professional development in the long run. It also brings to the fore examination formats at entrance, formative and/or summative level that exclusively feature oral interpreting tasks at the expense of written and/or sight translation (as mentioned in Section 2.2); such formats offer considerable potential for achievement for candidates with varying degrees of literacy. Conversely, testing procedures that encompass both written and oral translation activities in the same exam might put candidates with a high degree of orality and a lower degree of written language skills at a disadvantage.

Some institutions have thus opted to provide a tier that focuses mostly on oral skills and does not include written translation as part of the exam or assignment: see, for instance, the UK system, where interpreters employed by the Ministry of Justice can also hold a ‘partial’ DPSI, which exempts them from the written translation part of the exam (Ministry of Justice 2019), or the Austrian ‘light version’ of the court interpreter exam for ‘non-European’ languages, where candidates do not have to pass the written exam (Österreichischer Verband der Gerichtsdolmetscher 2021). This suggests the need, for both training providers and researchers, to pay more attention to the use and format of written and oral elements in intake tests (also see Lai and Mulayim 2010, 54).

Candidates’ motivation was mostly assessed implicitly through self-presentations, motivational letters and essays. In addition to the formal criteria and transversal aspects introduced above, such as soft skills and personality traits, the interviewees’ emphasis on applicants’ intrinsic motivation leads to more complex procedures. This focus on intrinsic motivation also suggests, as mentioned above, that official screening procedures are sometimes applied with some degree of flexibility and leniency, particularly where candidates do not, or only barely, meet the requirements but are nonetheless accepted with the aim of facilitating empowerment, or where training may help to cushion the worst negative consequences in the case of candidates who are already practising. This need to also target social skills and motivational aspects calls for multi-component testing, which will be discussed in the following section.

4.3. Entrance examination procedures

As already mentioned, seven providers have established minimum criteria and set up some form of entrance assessment. At first glance, self-presentation seems to be the most favoured component for assessing suitability and criteria such as language skills, prior professional experience and personality traits; an in-depth analysis, however, reveals that screening formats cover a wider spectrum of practices, varying in rigour and set-up. Whereas most providers adopt a low-level approach, assessing only certain competences, mostly spontaneously and intuitively and without relying on previously defined, standardised evaluation systems or external experts, two institutions implement more elaborate screening formats comprising several phases and multiple components (G-UK, W-ULG).

R-BZ, for example, simply invites interested candidates to an ‘interview’ to assess their motivation and degree of language proficiency, for instance by introducing ‘difficult terms’ into the conversation. In a similar vein, W-Diak contacts interested candidates by phone for a follow-up conversation to determine suitability, specifically when it is not clear whether an applicant’s language proficiency is sufficient. I-UK also conducts a face-to-face interview with each prospective candidate, lasting between 90 minutes and two hours, which serves to provide details about the training programme. Additionally, candidates are asked to present themselves and provide information on their backgrounds, particularly elaborating on prior work experience, language combinations and how they entered the field.

University-associated courses such as G-UK or W-ULG employ screening in more than one language, and W-ULG also offers language-specific training, whereas the remaining providers exclusively assess applicants’ German language skills. The entrance examination procedure established by G-UK, comprising an oral and a written component, serves to assess whether applicants have a minimum level of B2 in both their languages. The written test checks prospective participants’ text production skills by asking them to compose short essays in both their languages and to elaborate on their professional background, foregrounding their prior interpreting experience. These essays also offer information on applicants’ motivation and motives for application. In the oral follow-up interview, prospective candidates present themselves before a three-party examination board consisting of interpreting experts and trainers. Two examiners assess the candidate’s linguistic competences in the respective language combination; a third examiner takes notes. The discussion also serves to elicit a prospective candidate’s teachability and their social and communication skills. The evaluation of both screening phases is based on an internal, pre-established grading system. Finding suitable examiners for languages of limited diffusion (LLDs) proves particularly challenging; in some cases, language experts from other countries (Germany) participated remotely in G-UK examinations. What remains open is whether, and how, these experts’ suitability is ascertained.

The exam format of W-ULG consists of four parts: two written and two oral elements. The first written task focuses exclusively on proving C1 language proficiency; W-ULG is the only programme that uses an official CEFR test for German. Candidates must achieve at least 60% to be admitted to the oral exam. The latter comprises a self-presentation and a group discussion. After a 15-minute preparation phase, for which candidates are provided with a sample text on current topics, the group discussion takes place. Its main objective is to evaluate candidates’ social interaction and communication skills; special attention is paid to how candidates interact in a group and whether they show team and cooperation skills. During the COVID-19 pandemic, the group discussion was suspended, and the oral part consisted of an audio-recorded self-presentation.
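As a minimal sketch of the gating logic described above, assuming scores are expressed as percentages (the applicant data below are hypothetical, and this is not W-ULG’s actual marking scheme):

```python
# Sketch of the admission gate as described: a written C1 German test
# must be passed with at least 60% before candidates are invited to
# the oral part (self-presentation + group discussion).
PASS_THRESHOLD = 0.60

def admitted_to_oral(written_score: float) -> bool:
    """Candidates proceed to the oral exam only at or above the threshold."""
    return written_score >= PASS_THRESHOLD

applicants = {"cand_1": 0.72, "cand_2": 0.55, "cand_3": 0.60}
for name, score in applicants.items():
    status = "oral exam" if admitted_to_oral(score) else "not admitted"
    print(f"{name}: {score:.0%} -> {status}")
```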

5. Data analysis and discussion

Along the three thematic threads covered by our research questions, we critically discuss issues related to the providers’ general educational philosophy and the motivational factors that seem to guide their candidate selection, the range of specific test components and methods they choose to implement their overall educational goals, and the arguments brought forward to justify or explain the (non)suitability of their assessment approaches.

Our data suggest that, particularly with non-degree courses, entrance criteria and the providers’ selection procedures seem to be contingent upon the general philosophy underlying their programmes; providers’ reasons for applying or dispensing with specific criteria are inextricably linked to their overall objectives, which may be closely related to the corporate identity and philosophy of some providers, for instance church-affiliated providers or adult education institutions. One of the main recurring themes was the empowerment of course participants: several providers seek to offer participants a platform for communication and for fostering professionalisation, reflection, and networking. These providers target candidates who already have working experience in the field and generally do not establish strict entrance criteria. Shorter course formats, especially those targeting individuals with a migration background, see the empowerment of their course participants as one of the strong suits of their programmes. This was mirrored by interviewees highlighting ‘basic certification’ as a means of providing already practising interpreters with a chance for professional development, integration, and an awareness of their habitus as interpreters. Some training initiatives have also emerged mostly as a response to general market needs or specific in-house demands. In these cases, strict entrance criteria may not be required, and providers do not feel the need to install a more elaborate selection process. Here, they differ from university-affiliated institutions, whose motivation is not only to provide practice-oriented training but also research-fuelled interpreter education, and which are often required to comply with their institutions’ basic standards and thus have to establish a more refined process of candidate selection, though they, too, seem to employ some degree of leniency in specific cases. Interestingly, course providers with more elaborate procedures reflect very critically on the limitations of their standards, while providers with less formal policies foreground the positive aspects of their courses, for instance that training is available at all, or that it affords empowerment.

As regards specific criteria, a range of benchmarks was mentioned, not all of which were included in the testing procedures: language skills, professional experience, personality traits, soft skills and intrinsic motivation. Some providers establish minimum standards, such as a minimum degree of language proficiency (starting at A2) or accounts of previous interpreting experience. Sometimes proof of language proficiency is not coupled with a formal language test or the submission of an official language certificate (for instance, a CEFR certificate, which, in fact, would not be available for most languages other than a range of world languages). In these cases, providers rely on their subjective impressions of candidates’ (spoken) German proficiency and on candidates’ self-assessment of their proficiency in their other language(s), an approach that may be considered naïve on the one hand, but on the other is often the only available option due to an overall lack of resources. As already identified by Pöllabauer (2020, 37), these scant resources are among the reasons why training is often offered as non-language-specific training, instead of catering to current local demands for specific languages, particularly LLDs.

A second group of providers bases candidate selection on a multi-phase process. Several criteria are tested to determine candidates’ suitability for course participation, such as language skills, personality traits, intrinsic motivation, soft skills and communication management skills. These elements are mainly tested through language tests, whether standardised (CEFR) or not, oral interviews, self-presentations and group discussions, whereas intrinsic motivation was often assessed through motivational letters. Other forms of assessment mentioned in the literature, such as role-plays, shadowing exercises, or transfer exercises like written translation or sight translation, were not mentioned in our sample of interviews.

Some interviewees, mostly those affiliated with universities and with a research or academic teaching background, did point out that testing certain components mentioned in the literature would be beneficial but is, in fact, not practicable due to a lack of resources. Other providers, mostly those focused on providing practice-oriented training with the goal of empowering course attendees, do not seem to pay much attention to assessment criteria and are less strict in accepting candidates, and thus also do not critically assess their entrance procedures. In these cases, the overarching objective is to provide those who already work as interpreters with a set of minimum skills, on the premise that this is better than no training at all.

6. Conclusions

Our study has shown that approaches to the definition of entrance criteria and the organisation of screening examinations are diverse, ranging from formal selection criteria coupled with extensive, multi-layered testing procedures to more informal assessment practices.

Whereas aptitude testing in conference interpreting is clearly focused on candidates’ existing (linguistic, analytical) competences and their overall suitability for further training in the field, our study points to differences in providers’ approaches to candidate selection in PSI. Product-oriented testing, even where it is a test element in some of the courses under review, seems to be of lesser concern, while personality traits, soft skills and role awareness are given more weight. Even if such aspects are not explicitly addressed by the test formats in place, they seem to constitute a major element of the general course objectives of some of the courses under review. Where such test elements are included, however, the procedures in place seem to be rather flexible, often small-scale and not always research-driven, leaving room for improvement and professionalisation, for instance through interprofessional cooperation with experts from the fields of social and educational psychology.

Generally, providers’ overall educational objectives and the overarching needs they seek to address with their programmes have a considerable influence on their standards of candidate selection and the selection criteria installed for this process. Strict selection criteria are often not deemed appropriate when the focus lies on empowering candidates who already interpret anyway, on giving some elementary degree of further training to staff and upgrade a provider’s in-house interpreter pool, or on allowing low-threshold access to courses. Stricter procedures are generally upheld by university-affiliated course providers; however, even these allow for some degree of flexibility and leniency and seem to juggle their ideal notions of candidate selection with feasibility, particularly when it comes to speakers of LLDs and candidates from heterogeneous, often superdiverse backgrounds.

Considering the nexus between a provider’s overall course philosophy and target-group awareness, screening procedures that do not include strict candidate selection could be seen as a strategy to foster an inclusive and target-oriented approach to address market needs. Nevertheless, due to the sample size and the narrow geographical focus of our study, our findings are not generalisable. Further research on specific examples in other countries, as well as comparative and longitudinal studies, would help to broaden our understanding of entrance assessment criteria for PSI training or education initiatives and contribute to identifying potential gaps between the ideal forms of entrance procedures outlined in the literature and the ongoing practices in the field.

What our findings also suggest is that there is little communication between providers as regards basic standards and potential options for standardising and professionalising screening and assessment criteria, even in a small local context such as ours. In the absence of a national register, and in view of the scant interest of established professional associations in PSI generally, and in extra-university training in particular, most providers seem to struggle with similar challenges, which could perhaps be addressed more efficiently with a greater degree of inter-institutional cooperation. This is also something we noticed when conducting the interviews for this study: while some providers are informed about the course structure and educational objectives of fellow providers, and sometimes actively keep in touch with each other, other providers seem to operate largely in isolation, with hardly any contact with other providers, staff without a background in academia or even interpreting, and no strong connections to other stakeholders in the interpreting market such as professional organisations. These providers mostly seem to cater to the local and regional needs of organisations requiring interpreters.

Providers also do not always proactively communicate their course objectives and standards for candidate selection. This also means that users of interpreters, who are often not well informed about the specifics of interpreting, may not know that interpreters in PSI domains may have divergent skills and that not all courses will provide their trainees with the same level of expertise.

The overall situation seems far from ideal and could perhaps be improved through increased transparency from course providers and increased communication among them, in addition to joint efforts to campaign for more awareness among the users of interpreters of the need for standardised quality criteria for PSI and the professionalisation of this field. Ideally, too, though this is perhaps naïve, a comprehensive nationwide system of testing and accreditation at different levels, supported by all stakeholders, including political players and parties, and based on both research and practical experience, might act as a motor for development and a stronger commitment to professionalisation and high-quality interpreting that serves the requirements of those in need.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. The qualitative content analysis served to obtain a first impression of how criteria and procedures for course admission, among other aspects, are communicated to applicants and other stakeholders, and of how much weight is given to admission procedures among other factors. Our corpus of written course documents included curricula, information leaflets, FAQs, and homepage content. At the time of corpus compilation, all providers except R-BZ offered basic information both on their websites and through information leaflets and flyers, which indicates a general awareness of the need to market and promote one’s training offers. The most detailed and complete information was provided for W-ULG and W-ULG (MA). Across the material, information on the specifics of course admission is eclectic, except for W-ULG, which provides detailed information not only on the required level of language proficiency but also on the test format, preparation for the test, and a sample test. Two of the courses (L-Plus and R-BZ) do not provide any admission-related information. The remaining providers mention language proficiency; most, however, except for G-UK1, do not specify how this criterion is tested. Experience in interpreting is also mentioned, though in none of the material do applicants find information on how to prove experience.

2. For a recent overview of dialogue interpreting courses in Austria, including SL and court interpreter training, see Pöllabauer et al. (2021). The content of this overview has recently been digitised and can be accessed at https://dialogdolmetschendatenbank.at.

3. The QUADA course has recently been relaunched and renamed ‘Lehrgang Dolmetschen (Asyl- und Polizeibereich)’; see https://vhs.at/dolmetschen. The course focus now also includes police interpreting; entrance prerequisites remain the same (personal information, VHS.lernraum.wien). Other courses may also have initiated changes which are not reflected in this data set.

4. All interview passages were translated from German into English by the authors.

References

  • Amato, M., and G. Mack. 2022. “Interpreter Education and Training.” In The Routledge Handbook of Translation and Methodology, edited by F. Zanettin and C. Rundle, 457–475. London: Routledge.
  • Angelelli, C. V. 2007. “Assessing Medical Interpreters: The Language and Interpreting Testing Project.” The Translator 13 (1): 63–82. https://doi.org/10.1080/13556509.2007.10799229.
  • Angelelli, C. V. 2017. “Anchoring Dialogue Interpreting in Principles of Teaching and Learning.” In Teaching Dialogue Interpreting: Research-Based Proposals for Higher Education, edited by L. Cirillo and N. Niemants, 30–44. Amsterdam: John Benjamins.
  • Bancroft, M. 2015. “Community Interpreting: A Profession Rooted in Social Justice.” In The Routledge Handbook of Interpreting, edited by H. Mikkelson and R. Jourdenais, 217–235. London: Routledge.
  • Bergunde, A., and S. Pöllabauer. 2019. “Curricular Design and Implementation of a Training Course for Interpreters in an Asylum Context.” Translation & Interpreting 11 (1): 1–21. https://doi.org/10.12807/ti.111201.2019.a01.
  • Corsellis, A. 2008. Public Service Interpreting: The First Steps. Basingstoke: Palgrave Macmillan.
  • Delgado Luchner, C. 2019. “Contextualizing Interpreter Training in Africa: Two Case Studies from Kenya.” International Journal of Interpreter Education 11 (2): 4–15. https://tigerprints.clemson.edu/ijie/vol11/iss2/3.
  • Dresing, T., T. Pehl, and C. Schmieder. 2015. Manual (On) Transcription: Transcription Conventions, Software Guides and Practical Hints for Qualitative Researchers. 3rd ed. Marburg. https://www.audiotranskription.de/en/downloads/#practical-guide.
  • Grbić, N. 2023. Gebärdensprachdolmetschen als Beruf. Professionalisierung als Grenzziehungsarbeit. Bielefeld: transcript.
  • Gustafsson, K., E. Norström, and I. Fioretos. 2012. “Community Interpreter Training in Spoken Languages in Sweden.” International Journal of Interpreter Education 4 (2): 24–38. https://tigerprints.clemson.edu/ijie/vol4/iss2/4.
  • Hale, S. 2007. Community Interpreting. Basingstoke: Palgrave Macmillan.
  • Hale, S., and U. Ozolins. 2014. “Monolingual Short Courses for Language-Specific Accreditation: Can They Work? A Sydney Experience.” The Interpreter and Translator Trainer 8 (2): 217–239. https://doi.org/10.1080/1750399X.2014.929371.
  • Hlavac, J., M. Orlando, and S. Tobias. 2012. “Intake Tests for a Short Interpreter-Training Course: Design, Implementation, Feedback.” International Journal of Interpreter Education 4 (1): 21–45. https://tigerprints.clemson.edu/ijie/vol4/iss1/4.
  • Iannone, E., and K. Redl. 2017. “Ausbildungstrends in der Professionalisierung von LaiendolmetscherInnen.” In Zum Umgang mit Migration: Zwischen Empörungsmodus und Lösungsorientierung, edited by U. Gross-Dinter, F. Feuser, and C. R. Méndez-Sahlender, 123–144. Bielefeld: transcript.
  • Kadrić, M. 2019. Gerichts- und Behördendolmetschen. Prozessrechtliche und translatorische Perspektiven. Wien: Facultas.
  • Kalina, S. 2000. “Interpreting Competences as a Basis and a Goal for Teaching.” The Interpreters’ Newsletter 10:3–32. http://hdl.handle.net/10077/2440.
  • Kuckartz, U. 2018. Qualitative Inhaltsanalyse. Methoden, Praxis, Computerunterstützung. 4th ed. Weinheim: Beltz Juventa.
  • Lai, M., and S. Mulayim. 2010. “Training Refugees to Become Interpreters for Refugees.” Translation & Interpreting 2 (1): 48–60. http://www.trans-int.org/index.php/transint/article/view/29.
  • Mellinger, C. 2021. “Preparing Informed Users of Language Services in Public Service Interpreting Courses. Differentiated Learning Outcomes for a Diverse Student Population.” In Global Insights into Public Service Interpreting. Theory, Practice and Training, edited by R. Moratto and D. Li, 171–184. London: Routledge.
  • Ministry of Justice. 2019. Guide to Language Interpreter and Translation Services in Courts and Tribunals. London: Ministry of Justice. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790316/language-interpreters-translation-services-statistics-guide.pdf.
  • Moser-Mercer, B. 1994. “Aptitude Testing for Conference Interpreting: Why, When and How.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, edited by S. Lambert and B. Moser-Mercer, 57–68. Amsterdam: John Benjamins.
  • Österreichischer Verband der Gerichtsdolmetscher. 2021. “Zulassungsvoraussetzungen.” https://www.gerichtsdolmetscher.at/Gerichtsdolmetscher/Zulassungsvoraussetzungen.
  • Pöchhacker, F., and M. Liu. 2014. “Introduction: Aptitude for Interpreting.” In Aptitude for Interpreting, edited by F. Pöchhacker and M. Liu, 1–5. Amsterdam: John Benjamins.
  • Pöllabauer, S. 2020. “Does It All Boil Down to Money? The Herculean Task of Public Service Interpreter Training: A Quantitative Analysis of Training Initiatives in Austria.” In Training Public Service Interpreters and Translators: A European Perspective, edited by M. Štefková, K. Kerremans, and B. Bossaert, 31–57. Bratislava: Univerzita Komenského v Bratislave. https://9bb61ade-8110-4553-8092-e4ad094960e9.filesusr.com/ugd/3259ec_e1966adcbfbc400e9a23b8bb64bc5e86.pdf.
  • Pöllabauer, S., A. Bergunde, F. Grießner, A. Sourdille, S. Bahadır-Berzig, M. Behr, A.-M. Bodo, et al. 2021. “Dialogdolmetschen.at. Übersichtsdarstellung über Qualifizierungs- und Sensibilisierungsinitiativen im Bereich Dialogdolmetschen in Österreich (2001–2021).” https://doi.org/10.25365/phaidra.297.
  • Pöllabauer, S., and M. Kadrić, eds. 2021. Entwicklungslinien des Dolmetschens im soziokulturellen Kontext. Tübingen: Narr Francke Attempto.
  • Roberts, R. P. 2000. “Interpreter Assessment Tools for Different Settings.” In The Critical Link 2: Interpreters in the Community, edited by R. P. Roberts, S. E. Carr, D. Abraham, and A. Dufour, 103–120. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.31.13rob.
  • Robinson, O. C. 2014. “Sampling in Interview-Based Qualitative Research: A Theoretical and Practical Guide.” Qualitative Research in Psychology 11 (1): 25–41. https://doi.org/10.1080/14780887.2013.801543.
  • Russo, M. 2011. “Aptitude Testing Over the Years.” Interpreting 13 (1): 5–30. https://doi.org/10.1075/intp.13.1.02rus.
  • Russo, M. 2022. “Aptitude for Conference Interpreting.” In The Routledge Handbook of Conference Interpreting, edited by M. Albl-Mikasa and E. Tiselius, 307–320. London: Routledge. https://doi.org/10.4324/9780429297878.
  • Russo, M., and P. Salvador. 2004. “Aptitude to Interpreting: Preliminary Results of a Testing Methodology Based on Paraphrase.” Meta 49 (2): 409–432. https://doi.org/10.7202/009367ar.
  • Sawyer, D. 2004. Fundamental Aspects of Interpreter Education: Curriculum and Assessment. Amsterdam: John Benjamins.
  • Schweda Nicholson, N. 2005. “Personality Characteristics of Interpreter Trainees: The Myers-Briggs Type Indicator (MBTI).” The Interpreters’ Newsletter 13. http://hdl.handle.net/10077/2477.
  • Setton, R., and A. Dawrant. 2016. Conference Interpreting: A Trainer’s Guide. Amsterdam: John Benjamins. https://doi.org/10.1075/btl.121.
  • Skaaden, H. 2013. “Assessing Interpreter Aptitude in a Variety of Languages.” In Assessment Issues in Language Translation and Interpreting, edited by D. Tsagari and R. van Deemter, 35–51. Frankfurt a. M.: Peter Lang.
  • Skaaden, H. 2016. “Admissions Tests vs. Final Exams: A Comparison of Results from Two Performance Tests.” In Tolkutbildning – antagningsprov och digitala plattformar / Interpreter Education: Admissions Tests and Digital Platforms, edited by C. Wadensjö, 1–30. http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A967807&dswid=-3668.
  • Skaaden, H., and M. Wattne. 2009. “Teaching Interpreting in Cyberspace – the Answer to All Our Prayers?” In Interpreting and Translating in Public Service Settings. Policy, Practice, Pedagogy, edited by R. de Pedro Ricoy, I. Perez, and C. Wilson, 74–88. Manchester: St. Jerome Publishing.
  • Timarová, Š., and H. Ungoed-Thomas. 2008. “Admission Testing for Interpreting Courses.” The Interpreter and Translator Trainer 2 (1): 29–46. https://doi.org/10.1080/1750399X.2008.10798765.
  • Timarová, Š., and H. Ungoed-Thomas. 2009. “The Predictive Validity of Admissions Tests for Conference Interpreting Courses in Europe.” In Testing and Assessment in Translation and Interpreting Studies: A Call for Dialogue between Research and Practice, edited by C. V. Angelelli and H. E. Jacobson, 225–246. Amsterdam: John Benjamins.
  • van Dam, H., and P. Gentile. 2021. “Status and Professionalization of Conference Interpreting.” In The Routledge Handbook of Conference Interpreting, edited by M. Albl-Mikasa and E. Tiselius, 275–289. London: Routledge.
  • van Deemter, R., H. Maxwell-Hyslop, and B. Townsley. 2014. “Principles of Testing.” In Assessing Legal Interpreter Quality Through Testing and Certification: The Qualitas Project, edited by C.-G.-D. Miguélez, 27–39. Alicante: Universidad de Alicante.
  • Wadensjö, C., and H. Skaaden. 2014. “Some Considerations on the Testing of Interpreting Skills.” In Assessing Legal Interpreter Quality Through Testing and Certification: The Qualitas Project, edited by C.-G.-D. Miguélez, 17–26. Alicante: Universidad de Alicante.