
Marketizing the governance of design: design review in England

ABSTRACT

This paper explores the marketization of design review in England, a tool of design governance that prior to 2011 had almost exclusively been within the purview of the state. This is no longer the case, but neither is it the case that the involvement of the market in the delivery of such services has inevitably undermined their public interest raison d'être. The paper draws its evidence, and its structure, from three discrete research projects involving extensive stakeholder interviews (notably in London), 'reunion' workshops and a national survey of local planning authorities. It offers insight into a rare and, according to those involved, ultimately successful example of marketization in design governance services, albeit one with potentially limited application unless the right conditions for such a market occur.

Introduction

Put simply, design review is a peer review process for the design of built environment projects. Globally it is an increasingly prominent tool in the design governance toolbox where it is typically offered as a public service.

The ‘modernization’ of public services has been much written about as a key tenet of the neoliberal state. Whitfield (Citation2006) argues that such processes encompass the withdrawal of the state, the commodification of services, the introduction of competition and market mechanisms, and the general embedding of business interests into previous state functions. In England, between 2011 and 2018, design review was subject to such a change, moving from a publicly funded service dominated – although not exclusively delivered – by a single national agency, to a typically privately funded activity that a diverse group of market providers compete to deliver.

For some, this represents the thin end of an austerity wedge that dominated public services during the period. As one London planning consultant interviewed for this paper commented: 'This, essentially, is privatisation of the planning system by the back door and it's being used to justify more austerity … we don't need to appoint design officers, we'll just bring in the review panel'. Yet the UK has a long history of championing the neoliberal state, dating back to the privatizations of Margaret Thatcher's governments in the 1980s, and whilst there are few industrial concerns left to privatize, considerable debate flows around the legitimacy or otherwise of involving the private sector in the delivery of public services (House of Commons Committee of Public Accounts Citation2014; The Observer Citation2018).

This paper examines how these processes of marketization in the governance of design – specifically design review – have come about, what the characteristics of this new model are, and how it is performing. It does so by drawing evidence from three research projects. These, respectively: examined the national context for design review in the public sector dominated era of the 2000s, utilizing multiple qualitative research methods; explored changing practices across England in the post-2011 austerity era, via a national survey of local planning authorities; and, in this same period, explored practices in London in greater detail by tracing the progress of 12 projects through the design review process. The paper concludes with a look at the wider potential for marketization of design governance beyond the UK and beyond design review.

Formal and informal models of design review

Different models of design review exist. In the US, design review is typically a 'formal' tool of design governance (Carmona Citation2017) in that it is sanctioned in statute with a formal regulatory role. It is a relatively recent phenomenon, originating in the 1950s, but not gaining traction until the 1980s when practices spread so rapidly that by 1994 Case Scheer (Citation1994, 2) was able to report that 83% of towns had some form of review, although with wildly varying practice and no national coordination. Her own definition of the practice signified its formal role – 'the process by which private and public development proposals receive independent criticism under the sponsorship of the local government unit' – and her findings showed that 82% of design review processes in the US were mandatory and legislated, as opposed to advisory.

Case Scheer's analysis gave rise to a withering set of critiques around the potential for design review to be arbitrary, inconsistent, expensive, easily manipulated, under-skilled, subjective, vague, unfair, uncreative and superficial. For Punter (Citation2007), however, the critiques of design review were later answered by extending its remit beyond a narrow regulatory function. Schuster (Citation2005), for example, suggested that design review had the potential to act in many ways: like a jury, a peer panel review, a building inspector, a mediator, an expert decision maker, a facilitator, a professional support group, a planning consultant, an expediter and an educator.

Panels such as those created in Auckland (New Zealand) and Vancouver (Canada) have demonstrated the potential of this extended remit. In both cities design review has been used: to provide early and constructive advice to developers on specific development proposals; to advise their respective cities on policy and guidance frameworks; and generally to champion good design across the professional establishment and community at large (Punter Citation2003; Wood Citation2014). In these cases, the link between the design review function and formal regulatory processes is less clear cut, with design review being used more as a formative critique as opposed to a summative evaluation. This is a model that has precedents in the US, including notably in the Commission of Fine Arts, which since 1910 has, by order of Congress, been advising on design in the District of Columbia (Youngson Citation1990). In Canada the National Capital Commission has used a design review panel since the 1950s in a similar manner to review projects in the national capital area.

In the UK, design review has a long history dating back to the 1802 Committee of Taste (Carmona and Renninger Citation2018) and throughout has doggedly remained informal in nature, outside of statutory regulatory frameworks. Used in this manner, informal design review is an evaluation tool focused on improving the design quality of developments before they obtain formal regulatory consent. This is an approach developed through decades of national government directly funding design review. This occurred for 75 years under the auspices of the Royal Fine Art Commission, then of the Commission for Architecture and the Built Environment (CABE), which, for a further decade until 2011, continued and expanded the practice. CABE also played a critical role in establishing a regional network of Architecture and Built Environment Centres (ABECs) across England with a remit to conduct design review in their regions.

Integrated and separated models

Formal and informal design review processes map onto a further conceptual distinction made by Carmona et al. (Citation2010), relating to whether the evaluation of design quality in planning happens in an integrated or separated manner. In 'separated' models (Figure 1(b)), decisions on design are deliberately split from other planning/development concerns, with a separate statutory body – a design review board or commission – responsible for reviewing design. This either makes a binding recommendation to the zoning/planning board or grants a separate design consent itself. Such arrangements are widespread in the US where a formal process of design review often sits alongside but separate from zoning (Punter Citation1999). Under such circumstances the promoters of projects are compelled to undergo design review and, arguably, design issues will consistently receive an appropriate weighting before development approval is given or refused. However, a shortcoming is the difficulty in making the necessary connections between design and other development issues, some of which – such as decisions on land use zoning, density and transport/infrastructure provision – have major design implications. In these circumstances the danger is that consideration of design is reduced to 'mere aesthetics', throwing the legitimacy of such processes into question (Case Scheer Citation1994).

Figure 1. (a) integrated consideration of planning and design; (b) separated planning/zoning and design review. Source: adapted from Carmona et al. (Citation2010).

In 'integrated' models (Figure 1(a)), design is typically treated as an integral part of wider planning and/or zoning processes, in a single integrated process. In the UK, for example, judgements about the acceptability of design are ultimately made by local planning authorities, who may or may not seek the advice of an 'independent' design review panel, but who ultimately are responsible for weighing and balancing the advice received against other factors and determining the weight that should be given to it in the formal decision-making process. In such a system, design review has no formal status, and developers are not obliged to submit their projects to its scrutiny. Nor are planning authorities obliged to take design advice on board, or even to seek it in the first place, although they are encouraged to do so in national policy (DCLG Citation2018, para.129).

The danger is that design becomes side-lined by other factors and sometimes may barely be considered at all. There is also a danger that, in straitened times, processes that are not tied into the legislative decision-making framework can quickly and easily be chopped out in order to make some rapid savings, or perhaps hived off for the market to provide. This is what occurred in England in 2011, leading to the 'marketization' of design review.

The dawn of a new market

The initial body of evidence underpinning this paper is contributed by research which examined this transition as part of a larger study examining the work and legacy of the Commission for Architecture and the Built Environment (CABE). This work employed an inductive research methodology that sought to learn from the specifics of practice and apply that to an integrated theory of design governance. The essence of the approach was a multi-dimensional impact analysis of CABE’s work, allowing rich empirical evidence to be gathered (Carmona, de Magalhaes, and Natarajan Citation2017).

This research, conducted between late 2012 and August 2014, included an extensive interrogation of archival sources alongside 39 detailed interviews with stakeholders from within and outside of CABE (including in government) who had been centrally involved in establishing and developing the organization and its approaches, and eventually in shutting it down. Interviews were also conducted with key opinion formers on record as being either supportive and/or critical of the organization at various stages in its history. The heart of the research involved 24 'reunion' discussions (workshops) with those involved in various initiatives of the organization. The reunions focused on the different tools utilized by CABE, but almost all also encompassed some discussion of design review because of its dominance in perceptions of the role and impact of CABE. The methodology is fully discussed in Carmona, de Magalhaes, and Natarajan (Citation2018).

CABE and the spread of design review

Between 1999 and 2011 the Commission for Architecture and the Built Environment (CABE) was the UK Government’s advisor on design for England. Design review was CABE’s most high-profile service but was always just one amongst a diversity of effective evidence, knowledge, promotion, evaluation and assistance tools of design governance deployed by the organization (Carmona Citation2017). It was also informal in the sense that the service was not formally part of any statutory process of regulation or approval and only ever had an advisory status. Ultimately CABE hoped its design review programme would raise expectations of design and help build a culture of quality across England (CABE Citation2005).

Yet the more immediate function of design review was to improve individual schemes by providing advice from a pool of experts whose joint experience could be brought to bear. As explained in the publication How to do design review, design review 'brings a breadth and depth of experience that may not be available to the project team or to the planning authority; it can offer expert views on complex issues such as sustainability; and it can broaden discussions and draw attention to the bigger picture' (CABE Citation2006, 5). The distinguishing feature of design review was that it provided advice which was independent and bespoke, from experts unconnected with the schemes under review. As explained in CABE's 10-year review, 'most developers respect a judgement based on the opinion of professionals with no stake in the project but a great deal of experience from highly successful schemes elsewhere' (CABE Citation2009, 12).

CABE could not oblige developers to submit their schemes for review, nor could it require local authorities to take its advice on board, but despite this it never had difficulty in bringing large volumes of 'nationally significant' schemes to review, achieving a high point of 1203 submissions in 2007/2008 (CABE Citation2008), although only approximately a quarter of those were reviewed.Footnote1 CABE provided a national design review service that was generally (if not universally) respected and that had a positive impact on the quality of development and aspirations for design in developments across England (Carmona, de Magalhaes, and Natarajan Citation2018). Yet the national provision of review had limitations since the workload was extremely high, and CABE did not always have suitable expertise or knowledge of local areas.

To address this, CABE entered into a partnership with the Architecture Foundation in 2001 with the purpose of assisting it in the creation of Architecture and Built Environment Centres (ABECs) around the country. By 2010, 22 ABECs had been created and were supported through direct public investment via public sector grants and contracts. CABE was not the largest funder of these organizations, but by directing its limited funding towards organizations with regional or sub-regional coverage, it was able to establish a complete network of regional design review provision across England (CABE Citation2010).

In 2001 and 2003, CABE reported that 23% and then 26% of local authorities made use of a design review panel in assessing the design quality of planning applications (CABE Citation2001, Citation2003), and by 2009 the figure had risen to 50% (CABE Citation2009), albeit many of them only intermittently. CABE's own 10-year review suggested that over a decade of operation CABE had reviewed over 3000 major development proposals at an average cost of £2500 per review (or 0.1% of construction costs); that the organization had, at some point, reviewed schemes from 85% of English local authorities; and that 70% of subsequent planning decisions were taken in line with the advice receivedFootnote2 (CABE Citation2009). By this time CABE was in receipt of approximately £12 million in public funding annually, approximately 20% of which was used to fund its design review activities. A further £1.86 million was used to fund the ABECs' regional design review work (CABE Citation2011).
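Taken at face value, these figures permit a simple back-of-envelope check (the arithmetic below is an inference from the numbers above, not a calculation reported in the CABE sources):

\[
\text{implied average construction cost per scheme} \approx \frac{£2500}{0.001} = £2.5\ \text{million}
\]
\[
\text{implied annual public spend on review, c. 2010} \approx (0.2 \times £12\text{m}) + £1.86\text{m} \approx £4.3\ \text{million}
\]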

The bite of austerity

A year after CABE celebrated its tenth birthday, the global financial crisis of 2008/2009 led to a severe shock in the public finances of the UK, and from this point on CABE (like other public services) was operating under the shadow of economic retrenchment. CABE was clearly an easy and quick cut to make, and a cut far less visible (to the public) than cuts to the types of high-profile cultural institutions that were the alternative for the organization's sponsoring Ministry and which had high-profile advocates fighting for their slice of the national cake (Serota Citation2010). The response from the architectural profession was, at best, mixed, with comments posted online that included: 'Nice to have a little sugar today to sweeten the pill' (quoted in Carmona, de Magalhaes, and Natarajan Citation2017, 120). Comments often reflected the erroneous perception held by many that CABE amounted to little more than its design review function, a function that, as one post revealed, had created many enemies for the organization:

CABE was an organisation which was based around the flawed theory that an overgrown architectural “crit” could improve the design of buildings. It was filled with the self-righteous self-important old boys and girls of the architectural establishment who could happily sit around carping about other architect’s designs till the cows come home. Name another profession that openly criticizes its colleagues work? No wonder we are so devalued by the rest of the industry (anonymous online post, quoted in Carmona, de Magalhaes, and Natarajan Citation2017, 120).

At the close of the 85-year era of nationally funded and led design review, the function was by no means universally supported, and considerable doubt existed around whether design review would survive at all. Concurrently, however, the Design Council (also a casualty from a national cull of quangos) had obtained permission to continue solely as a charity.Footnote3 Talks between the two Chief Executives revealed both had a common cause and a potential synergy that could be exploited if a merger was to occur. Following a guarantee of transitional funding from the government, Design Council CABE was incorporated as a private subsidiary of the new charitable Design Council, and, as a key protagonist within CABE recalled: ‘It was that which provided an opportunity for us to see whether we could salvage something from this mess’.

The funding came in the form of £5.5 million over the accounting years 2011/2012 and 2012/2013, a large part of which was intended to allow the new organization to develop its own income streams, most notably by commercializing design review, and in the process jump-starting a new market. In fact, because CABE was effectively moving out of the public sector, EU competition rules required government to tender for this work, which was then publicly advertised. Three other organizations (thought to be the RIBA, the Prince's Foundation and the Architecture Centre Network) also bid for the work, each seeing the commercial possibilities of design review, which they believed would be more lucrative than it turned out to be. CABE beat off the competition to secure the transitional funding.

In April 2011 Design Council CABE and the regional ABECs found themselves immersed in a new context defined by the overwhelming drive for austerity. Whilst the withdrawal of funding at the national level was dramatic, arguably of equal magnitude was the rapid squeeze of local government finances, most notably those relating to the built environment.Footnote4 Even if it had wanted to, local government was no longer in any position to purchase design review services, meaning that the future funding of design review could only come from one place: the private sector.

Defining a market (and getting it to work)

Despite the almost total withdrawal of funding at national, regional and local scales, the Conservative-led Coalition Government was never ideologically hostile to the pursuit of good design through the governance of design; indeed, the Conservatives had re-committed themselves to this agenda in the run up to the 2010 election (Conservatives Citation2010). An early initiative of the new government was to streamline the voluminous planning policy (1300 pages of it) that had accumulated over the New Labour years and to replace it with a 65-page National Planning Policy Framework (NPPF) published in March 2012 (DCLG Citation2012). The new framework laid out unequivocal support for the importance of design, and included an important new addition to national policy which statutory CABE (before its demise) had lobbied heavily for. It stated: 'Local planning authorities should have local design review arrangements in place to provide assessment and support to ensure high standards of design'. Local authorities 'should also, when appropriate, refer major projects for a national design review' (para. 62), something that, a footnote noted, was 'currently provided by Design Council CABE'.

Coming so soon after the winding up of CABE as a publicly funded organization, the inclusion of the new guidance may seem surprising. However, for a government aspiring to high quality design but unwilling to support it financially, it was a logical step on the road to the creation of a market in the governance of design services. Interviews with those involved in the negotiations reported that the Minister of State for Housing and Local Government, Grant Shapps, was particularly keen to work with CABE to find a solution to the funding crisis, although not at public expense. As design review seemed to be the most easily commoditized tool in the design governance toolkit, it was on that basis that in April 2011 twenty staff, largely from design review, transferred from CABE to the new company.

Thus, whilst the Coalition Government oversaw the demise of publicly funded CABE, alongside a good part of the larger design governance infrastructure that had been gradually built up across the country since the mid-1990s, it also played the key role in instigating the stuttering but ultimately viable emergence of a market in design review. Underpinning this was the necessary growth of a new bottom-up entrepreneurialism amongst service providers, many of whom had previously been able to rely on direct public funding for their existence and now had to learn to sink or swim in this new market.

Design Council CABE

For Design Council CABE, operating within the market quickly revealed that there were other more nimble and market-savvy organizations only too willing to grab a slice of the diminishing action. Despite this, and a range of considerable early challenges, setbacks and delays (Carmona, de Magalhaes, and Natarajan Citation2017) – including the almost total renewal of the members of staff who had come over from publicly funded CABE – the situation eventually gave way to a model that fully embraced the new market realities. Key aspects of the new model included:

  • Payment by developers against a schedule of charges. In 2015 these were optimistically advertised as £4,000 for a 'Preliminary design workshop', £8,000 to £18,000 for a 'Phase one pre-application presentation review', £5,000 to £8,000 for a 'Phase two review' and £3,500 for a 'Planning application review'.Footnote5

  • Recruiting a network of 250 Built Environment Experts (BEEs) from across the sector representing all strands of interdisciplinary expertise for the organization to call upon as and when the demand arose. BEEs were to be paid a standard rate for their involvement in reviews and nothing if they were not involved.

  • A move away from reliance on general design review on an ad hoc basis to focus instead on signing up particular local authorities in order to provide a comprehensive design review service. Amongst the early takers of this service was the City of Oxford, whose panel meets once a month, with a discounted fee paid per review by the council and then reclaimed from developers.Footnote6

Design Council CABE initially used their transitional money from government to try to pump-prime the market, particularly in London where they wished to establish the organization's claim over this key territory (Bishop Citation2011). The intention was to encourage Boroughs into a pay-to-use design review habit by asking them to sign a memorandum establishing that they would use the organization's services, with government money covering the first £20,000 of costs, after which developers should foot the bill.

Interviewees reported that Design Council CABE did a hard sell and made sure everyone knew what was on offer, although even with the substantial sweetener less than half of London's Boroughs signed up to the new service, and once the free sessions had been used up the number involved fell dramatically. One observer concluded, 'this was a glaring bit of inappropriate support and a waste of public money', although the initiative, alongside the push given by government in policy, did give rise to a significant interest in London amongst the Boroughs which eventually led to many setting up their own panels.

For Design Council CABE, however, only one Borough – Royal Greenwich – signed up following the initial push.Footnote7 The organization's financial results revealed that, with an inherited public sector mode of operation and associated overheads, it was a difficult model to make pay. By the end of the 2013/2014 financial year, its first year of operating without transitional funding from government, Design Council CABE had conducted just 55 design reviews and recorded an operating loss of £374,000 (Design Council Citation2014). Following reductions in staff from 20 to 12 and re-launch as the 'Cities' programme in 2014 (with design review pitched as one amongst a package of design governance services for local authorities to purchase), by the close of the 2016/2017 financial year the gap between expenditure and income had risen to £700,000Footnote8 (Design Council Citation2017), albeit on a much larger turnover. Losses were recorded for every year in between.

The Design Council CABE team (in 2018 reduced to eight) continues to proactively develop its offer of a comprehensive design review service to local authorities, where possible bolstered with other 'value adding' training, enabling and support services. It sees the certainty of long-term income generated by such deals as a more attractive and commercial proposition than the uncertainty of ad hoc reviews from around the country. In effect, any meaningful national design review service has all but died. Survival was also only achieved at the expense of the organization abandoning its former national leadership role and focusing almost exclusively on those aspects of its operations for which income could be generated, notably design review.

A diversity of providers

In the words of one panel manager, even if 'London was not proving to be lined with the design review gold that the Design Council had hoped for', the period largely met the government's intentions of jump-starting a market where none had existed before. Outside of London, the end of transitional funding to the regional ABECs quickly led, by June 2012, to the closure of their umbrella organization, the Architecture Centre Network (Fulcher Citation2012), and its replacement in 2013 by a much looser alliance, the Design Network. This was a network of the eight organizations that had hosted the regional design review panels for CABE and which, alongside Design Council CABE, had benefited from the transitional funding.

Covering all regions of England outside of London, the Design Network represented an attempt to carve the country up between the eight (leaving London for Design Council CABE), and in a monopolistic fashion to pursue a business model based on exploiting the new advice in national policy that design review arrangements should be put in place to support planning decision making (Hopkirk Citation2013). The aspiration was quickly undermined, however, when other providers emerged and when it became clear that Design Council CABE had no intention of restricting operations to London. It was further tested when, later in 2013, Shape East (covering the East Anglia region) was declared unviable and ceased to operate. The remaining seven were now joined by Urban Design LondonFootnote9 (rather than the Design Council) offering free design review for public realm schemes in London. The vacant eastern region was taken over by Design South East, adding this to their existing territory.

Often with minimal resources, the regional providers have nevertheless continued to operate, and have increasingly moved to a now tried and tested market model. First, this entails being contracted by local authorities to run a regular dedicated panel on their behalf, and second, being paid for their services by developers, either directly or indirectly via a payment to the local authority. Charging also rapidly became the norm in local authorities that elected to run their own panels in-house.

Table 1 reflects the array of design review delivery organizations now operating across England, although the relative vibrancy of the actual market varies substantially across the country. In the north-east of England, for example, just one organization – NEDRES – provides a design review service; in the south-west the South West Design Review Panel (managed by not-for-profit Creating Excellence) provides a regional service; Cornwall County Council maintains its own panel; The Architecture Centre, Bristol, runs the City's design review panel; and a private consortium, The Design Review Panel, operates throughout Devon and Somerset to deliver, according to their own publicity, 'a cost effective' alternative.Footnote10

Table 1. Range of design review organizations operating across England in 2015.

Commenting on this state of provision, one prominent design review panel member suggested: 'Let 1000 panels bloom, provided they're offering reviews of sufficient quality'. For others, the demise of a central publicly funded watchdog to maintain standards represented a serious concern in the face of such diverse provision, and the inevitable divergence of practices that would ensue (Urban Design London Citation2015). As one commentator observed: 'It's almost like you need the core organisation to keep everything honest, but today most review is beholden to nobody'.

Independence and the commercial imperative

A related issue is how the commercialization of services impacted on the essential relationship between the provider and recipient of design review, and in particular on the independence of the advice given. For some: ‘That was in the DNA of the old Commission, complete independence, you say what you think and you’re beholden to nobody, which no longer exists. It means that developers, especially the bad ones, can ask that simple question, “do I have to take the risk of taking my scheme to review and getting a stinger of a report”, because if the answer is no, why take the risk’. As an insider with experience of both CABE and post-CABE design reviews reported: ‘I don’t think it’s changed anything that we’ve written, but it changes the atmosphere because there’s always that thing in the back of your mind which is saying “now on the basis of our performance at this review, never mind what we’ve actually said, is it more or less likely that these people would come back to us with their next scheme?”’.

In these ways the commercial imperative profoundly changed the essential relationships within design review, a reality that was quickly recognized by Design Council CABE, members of the Design Network and others, many of whom subtly changed their practices to move away from the more confrontational – and arguably challenging – style of old design review practices to a more supportive workshop style. Justifying the change, Design Council CABE repeatedly argued that the new processes could be more constructive and less confrontational whilst still retaining the organization's independence in line with its charitable statusFootnote11 ‒ more formative and less summative. It certainly had the potential to address some of the recurring criticisms levelled at design review prior to 2011, notably that it was frequently too detached and paternalistic in style (Carmona, de Magalhaes, and Natarajan Citation2017).

However, for some the new reliance on the market had fundamentally changed the nature and efficacy of design review which they saw as no longer truly a ‘public’ service. For example, Jon Rouse (the former Chief Executive of CABE) remarked:

The thing about CABE is that it was set apart from the market. One anxiety I have is that the integrity of the process is not compromised by the need to charge. For a really bad scheme, if an architect or developer has paid £20,000 for the privilege [of being reviewed], is it that easy for them [Design Council CABE] to turn around and say “start again, it’s rubbish”? (quoted in Rogers and Klettner Citation2012).

Whether independent or not, operating in the market, design review providers could no longer afford to alienate the clientele on whom they relied to pay the bills, and neither could they afford to do reviews that were not ‘useful’ to those commissioning them. However, given that the vast majority of their work was commissioned by and conducted for local government (and other public sector organizations), even if paid for by the private sector, design review was clearly still being conducted overwhelmingly with the public interest at heart. Arguably, therefore, the need for repeat public sector business represented the ultimate guarantee of probity and quality.

The Design Network organizations across the country also quickly realized that without a high-profile voice such as the old publicly funded CABE promoting the need for good design and design reviews nationally, it would be hard to survive on a single product alone. They concluded that greater diversification and a more supportive offer were required, extending into community engagement, arts and culture, project support, capacity building, schools education, and professional and councillor training. As one commentator wryly observed, 'Maybe they had learnt from CABE's busiest years; design review might be the icing on the cake, but nobody really likes icing on its own, you need cake too'. Unfortunately, whilst each of these services offered potential to extend the market and at the same time help to change local cultures and priorities on design, most were even more marginal, and certainly less predictable, than design review. Nevertheless, to survive in this climate, regional and local organizations without the ability to cross-fund from more profitable activities (as Design Council CABE was able to do) adopted a common strategy with three elements:

  • Being entrepreneurial, supported by a smorgasbord of services (the more diverse the better).

  • Reducing fixed overheads (personnel and premises) and utilizing an ‘expert’ network (local and/or regional) that can be flexibly called upon in different combinations as and when required.

  • Carefully tailoring the offer to local circumstances.

In very different parts of the country, from the largely urban West Midlands to the largely rural south-west, and from the relatively wealthy south-east to the relatively poorer north, this formula is now repeated across the range of design review providers.

A growing market, nationally

To understand this new context, a second source of evidence underpinning this paper was contributed by a short survey conducted in early 2017 across the 374 local planning authorities in England. The survey was conducted using Freedom of Information provisions, and 201 local authorities responded, representing a response rate of 54% of English planning authorities and a broad spread regionally and across urban, semi-urban and rural, affluent and less affluent areas. Primarily the survey sought to elicit data on design capacity and skills in local authorities (Carmona and Giordano Citation2017), but three of its nine questions focused on the use of design review: (1) Does your local authority make use of a design review panel of any kind in assessing the design quality of planning applications? (2) Estimate how often your local authority makes use of a design review panel? and (3) Who manages the design review service that you use? Each had simple closed categories for response, with an open opportunity for respondents to justify their approach. The full list of survey questions, the list of responding authorities and the methodology are discussed in Carmona and Giordano (Citation2017).

The survey revealed that the numbers regularly using design review services had increased to 64% (Figure 2), suggesting that the marketization of design review had led to an increase in uptake, or at least had not significantly undermined the upward trend set in train between 1999 and 2011. The headline figures were not, however, the whole story, as a large differential in the level of use was also revealed. When asked how frequently they used a design review panel, only 19% of authorities used a panel regularly, meaning monthly or quarterly. A further 37% used a panel occasionally, and the remaining authorities used design review only very rarely or not at all.

Figure 2. Percentage of authorities regularly using design review.

Among the 62% of those using the panel very rarely or occasionally, the most common explanation for this pattern of use was that only large or unusual planning applications were subject to design review. Some commented that they expected the developer to engage the review panel and did not see it as their responsibility; others that they would only use design review if the applicant was willing. For these authorities, there was a noticeable tendency to look to the development community to take the lead in these decisions, indicating that (in such cases) an almost complete abdication to the market had occurred.

Among reasons for not using a panel at all, cost was most frequently mentioned – despite design review being chargeable to developers – together with worries about delaying the development process and uncertainty over the accountability of external panels. The consensus among those who commented suggested that, given the budget and greater clarity with regard to the process of design review and its impact on the overall planning process, more local authorities would welcome the opportunity to use a design review panel.

When asked about how panels were managed, one-third of respondents revealed that they used an internally managed panel, whilst just over one-third used an externally managed panel.Footnote12 Twelve per cent used another public-sector panel (for example, a panel managed by another local authority) and approximately 4% used more than one panel. Geographically, local design review panels were less common away from the south-east, south-west and London, and virtually absent in the east. The geographical spread suggested that, where successful panels have been established, the practice of using design review quickly spreads to neighbouring authorities, thus establishing clusters of use. This was most obvious in London where the greatest density of panels can be found, with 80% of London Boroughs using design review either regularly, or on an ad hoc basis.

A fragmented and still immature market

Overall, the survey suggested that despite seven years of a growing market in design review services, ignorance about how design review might be used and charged for was still apparent within local government. Clearly there is still scope for the market to grow and mature (even in London), and there is definitely scope for the market players to better communicate their products and the value they can add. However, there is also a fragmentation of the market, and no coordination across the sector to try to build the total market.

In February 2016, this was also the message from the first ever Parliamentary Select Committee on National Policy for the Built Environment, a six-month enquiry held within the House of Lords to scrutinize policy making related to the built environment. Whilst the Select Committee did not question the move of key design governance services into the market, they argued that provision was often inconsistent and disjointed with an insufficient level of activity to justify a wider investment by the sector in design review. The recommended solution was more government action, this time to mandate design reviews for all ‘major’Footnote13 planning applications with the aim of driving up the volume and ultimately the quality of such activities, and as a means to encourage the market to mature (House of Lords Select Committee on National Policy for the Built Environment Citation2016).

Government did not heed the call, and in 2018 its revised National Planning Policy Framework (NPPF) rowed back on earlier provisions in the 2012 NPPF by dropping the all-important statement that: 'Local planning authorities should have local design review arrangements in place'. Instead it included the blander assertion that 'Local planning authorities should ensure that they have access to, and make appropriate use of, tools and processes for assessing and improving the design of development' (including 'review arrangements' – para. 129). The impact of this move on the still immature market has yet to be seen.

The burgeoning London market

Despite the situation nationally, London was revealed to be – by far – the most mature territory for design review. Consequently, over a 10-month period from March to December 2017, the third source of evidence underpinning this paper focused on the new landscape for design review in London. Whilst part of a larger study which ostensibly examined the practices and impacts of design review in the city (see Carmona Citation2018), this work also revealed much about how the new market was operating.

The methodology was qualitative incorporating interviews with 40 key individuals across six main categories: design review service providers; local planning authority officers and councillors; panel chairs and other design review panellists; highways authority project officers; applicants for planning permission (developers); and architects and other designers (working for developers). The research examined the practices of 12 design review panels across London encompassing seven Boroughs, one development corporation, a utilities provider, the Greater London Authority and Transport for London (TfL), which focuses on public realm schemes.

Twelve projects that had been reviewed between January 2014 and December 2015 were chosen across eight of the panels and for each, the experience of journeying through the design review process was traced with the different parties that had been involved in the process. Projects were chosen by the research team from data provided by the various panel managers in order to ensure a balanced coverage of the range of schemes that had passed through the various panels during the period. The methodology is further discussed in the full research report (Carmona Citation2018).

Attitudes, aspirations and panel types

Attitudes to both design and design review vary significantly across London's 33 Boroughs, although they fall into four distinct camps, as represented in Figure 3. These reflect, first, whether the pursuit of design quality is prioritized by authorities and, second, whether design review is included within the armoury of approaches used to address the concern. Whilst, in the turbulent economic climate of 2011/2012, a market in design review services initially struggled to establish itself in London, it has recently burgeoned. Today there are approximately 30 panels operating across the city, with the growth reflecting a noticeable move amongst existing users of design review from the 'design quality not prioritized' to the 'design quality prioritized' camp. In turn, this is encouraging Boroughs to establish a more systematic approach to design review, moving away from occasional ad hoc use.

Figure 3. Attitudes to design and design review amongst London’s Boroughs.

When asked, those managing, commissioning or serving on design review panels, and designers presenting to panels, had a series of complementary aspirations for design review. These broadly focus on achieving better design and place-making than would otherwise be achieved without a panel, notably by empowering local planning authorities to demand better standards from developers 'wanting to do something good, rather than something that's good enough'. For their part, developers were more circumspect in their aspirations for design review, and whilst accepting that the practice did raise standards of design, its use was often viewed as a necessary additional hurdle to be overcome on the way to getting planning consent.

Encouraged by the changes in national planning policy, there has also been a strong element of Boroughs looking at each other in order to learn from and adopt the best practices of their neighbours. As one interviewee commented: ‘Our chief executive had had a very positive experience at her former Borough, they’d had a panel there and she’d seen it work well’. The increasing demand for development across both Inner and Outer London and the squeeze on resources within local government have also led Boroughs to seek innovative means to assist decision making within local planning authorities, including greater use of design review. This has led to a professionalization of design review as Boroughs that had unofficial, sometimes self-appointed, panels have been switching to an official panel with an associated charging regime. Sometimes there has been opposition to this when local panel members felt disenfranchised, but the change has typically been driven by a realization that such informal practices were not able to deliver the step-change in design quality that was desired.

Four types of panel have resulted (Figure 4). First, those set up and managed in-house within a public authority (usually a London Borough). Second, those managed on behalf of a public authority by an independent third-party contractor. In-house providers can be further divided between those that charge for design review services and those that are offered free to the end user. External providers always charge and can be divided between not-for-profit providers of design review services and private companies.

Figure 4. Types of design review panel in London.

The research revealed no evidence that any of these four models was intrinsically superior to the others (with regard to the quality of service or outcomes) and, when properly resourced, each was capable of delivering positive results (Table 2). Equally there was no evidence that particular types of Boroughs (central, inner or outer London) favoured one form of provision over another, or indeed favoured 'provision' over 'no provision'. All forms of provision (and none) are geographically distributed across London. There were, however, significant advantages and disadvantages that became apparent when comparing in-house against externally managed panels, and notably when comparing paid-for services against those that are free to applicants.

Table 2. Design review in practice.

To pay or not to pay?

If the service is to be chargeable, then the first decision concerns who will manage it. In 2018 three external providers offered design review services in London, together running 12 panels:

  1. Design South East (a not-for-profit regional provider), offering services in London for the first time in 2017, to the Borough of Kingston upon Thames.

  2. Frame Projects (a private company), providing services to the Boroughs of Camden and Haringey, to the London Legacy Development Corporation (c/o Fortismere Associates), Old Oak & Park Royal Development Corporation, and to the High Speed 2 rail project.

  3. Design Council CABE (a not-for-profit national provider), providing services to the Boroughs of Barking & Dagenham, Bexley, Brent, Greenwich and Waltham Forest, and to the Thames Tideway infrastructure project.

A key consideration is how different players position themselves with regard to the service package being offered, with many interviewees clear that a free market meant that there could be no one-size-fits-all approach. Instead, it is quite appropriate for local authorities to seek to tailor their design review arrangements to suit their particular circumstances. Some providers, notably Design Council CABE, see themselves as a premium provider, able to offer a package of 'other' complementary services around design review, whilst others offer what was described by one planning officer as 'a meat and potatoes' service, meaning just design review.Footnote14

In such a context, purchasers need to have regard for the quality of the service being purchased and how this is reflected in the price being paid. However, whilst there were clear differences between services run on a shoestring or free basis and those that were professionally organized (either in-house or externally) and charged for, qualitative differences between the various professionalized providers of design review services, in terms of how panels are run and the outcomes they achieve, were harder to detect. This suggests that in London competition is largely on price rather than on the level of service, although providers would certainly dispute this. As a manager at one design review organization commented: 'One of the issues around design review is the different layers of the market. If we're bringing together a national, or an international group of experts and yet other players in the market are offering a much less expensive model, but with different quality of results, the question becomes how you value quality and how you pay more for quality, if that's appropriate to your scheme'. In other words, a premium service should attract a premium price.

At a London-wide scale the Greater London Authority (GLA), through the Mayor's Design Advocates, and Transport for London (TfL) (care of Urban Design London) are the largest providers of free design review in London. The former focuses on strategically significant projects whilst the latter focuses on reviewing publicly funded public realm projects over £1 million in value (projects often led by the Boroughs). These London-wide services are fully funded by the Mayor and are run in-house on a professional basis. A small number of Boroughs also provide a free pared-down service with inevitable compromises such as the absence of a compulsory site visit for panellists, less frequent and shorter reviews, the use of voluntary (unpaid) panellists and, in the absence of dedicated staffing, a greater strain on internal staff time.

Given a general willingness of developers to pay in anticipation of a smoother planning process, alongside the professionalization of design review that it was clearly possible to achieve through such means, the continued use of free or inadequately funded design review seems increasingly difficult to justify. As one panel manager commented: ‘Applicants are happy to pay for these things, that’s what they do, but they need the service and they need that whole business mind in how you run and involve them’.

In-house or external?

The research revealed a range of perspectives on whether design review is best run in-house (within Boroughs and other agencies) or contracted out to a specialist (market) provider of design review services (Table 3). The benefits of external provision coalesced around the ease of setting up and running panels and the cost effectiveness of this model. The need for a proven financially neutral model was particularly important to commissioning Boroughs, amongst whom the national survey had revealed that the perceived cost to the public purse of providing design review was the number one reason for not using a panel. With almost half of London's Borough panels managed in this way, the package offered by specialist external providers was attractive to hard pressed, risk-averse councils.

Table 3. Advantages of contracting out and remaining in-house.Footnote20

Thirteen panels are managed in-house within Boroughs (Croydon, Enfield, Islington, Kensington & Chelsea, Hammersmith & Fulham, Harrow, Hackney, Lewisham, Merton, Newham, Southwark, Tower Hamlets and Wandsworth). Amongst these, the dominant perspective was that design review should be part of a constant conversation between developers and their design teams and the local authority, and that if there was too much of a separation, design review could become 'out of sync', leading to mounting tensions. Analysis of the externally managed panels suggested that this had not occurred and that, however managed, the work of panels could be successfully integrated into other pre- and post-application processes. There was also a perception that payments for design review could be used to help build design expertise within local authorities, with any surplus of income used to support internal design capacity, rather than contributing to the 'profits' of the external organization (see below).

It became apparent that when setting up or re-tendering panels, local authorities are increasingly doing considerable research to review the various models in order to select which is right for them. As one panel manager commented, 'We looked at the Islington panel, we went to some of their review days, the LLDC panel, the Newham panel and the Old Oak Common and Park Royal panel. So that helped mitigate some of the challenges that we could have faced'. At the time of writing, at least two externally managed panels were in the process of switching their long-term provider (one had just moved to a new external provider and one was considering taking the service in-house). This activity suggests, first, that a good deal of shopping around is common, with the re-tendering of contracts an everyday occurrence for providers; and second, that as a consequence, a real market is clearly in operation.

Operating without a panel

At the time of writing, 12 of London’s Boroughs (Barnet, Bromley, The City, Ealing, Havering, Hillingdon, Hounslow, Lambeth, Redbridge, Richmond upon Thames, Sutton and Westminster) had no dedicated design review provision, although at least two of these were in the early stages of establishing a panel and others periodically commission external providers on an ad hoc basis to review particularly significant schemes. Interviewees that had either worked for these Boroughs or who had served on such ad hoc panels were unanimous that such models were sub-standard because of the lack of consistency in panel membership and the associated lack of local contextual knowledge: ‘One-off panels don’t develop a relationship with the local authority. The most successful panels are those that are bespoke to the need of the local authority’.

The exception to this is when local authorities have had a financial interest themselves in a development, for example, if they have a land holding. In such cases in order to avoid perceptions that they are reviewing their own schemes, Boroughs sometimes use an independent third-party panel. In a few such cases Boroughs have sanctioned panels being commissioned directly by developers with no corresponding head contract with the authority. This raises issues about who should be in the driving seat, how independent developer commissioned (as opposed to just paid for) panels are likely to be, and consequently whether local authorities should accept their advice. This is a matter of all-important perceptions as well as reality.

The question of independence

The question of independence presented different challenges for panels depending on whether they were internally or externally managed. In-house panels in particular were sometimes perceived by developers to be too close to the planning authority; as one characterized them: ‘led by the planners and it doesn’t feel like an unbiased review’. Some authorities clearly wished to keep a tighter rein on their panel than others, and this situation was compounded in the rare circumstances that local politicians sat as panellists. Most interviewees felt that such practices should be avoided and that in-house panels have to work harder to ensure that panel members know that their feedback should be unbiased and impartial.

Criticism was also levelled at external providers, whose model of operation increasingly sees them paid directly by developers to deliver a design review service, albeit at the instruction of, and as required by, the requisite Borough with whom they (typically) hold a head contract. Some felt that this relationship between design review providers and developers could at times become too close. As one developer who could remember the pre-market era in design review services commented: ‘You used to get proper nervous before a design review and now, it’s a cosy chat because it’s being paid for by the client’. Whilst this view was not widely held, the research revealed the disturbing case of one unhappy design team (in the early days of the market system) complaining about the review that they had received and being offered a second one by the external provider: ‘From being told that it was a terrible design, we were told it was a rather good design and they were looking forward to it happening’.

To avoid such situations, interviewees were clear that independence requires a distance to be maintained at all times between the panel and its managers on the one hand, and developers and their teams on the other. As a minimum, this seems to require that, even if a review is paid for directly by a developer, the client for that review remains the public sector. It also means that panels should avoid getting too close to an applicant’s scheme: most agreed that, even having watched a scheme develop through successive reviews, a panel still needs to be able to say ‘no’ at the end of the process if necessary. To circumvent problems, panels routinely establish conflict of interest provisions for panel members, with the most transparent maintaining a register of interests to record the clients with whom panel members have worked (typically within a five-year period) and whose projects they are therefore unable to review.

Despite this, some interviewees argued that the world of large London developers and large consultancy practices is a small one, and so some conflicts are to be expected. Moreover, with architects assessing other architects’ work in an environment where they often know each other well, expecting completely unbiased advice may be unrealistic. Such perceptions were exacerbated by developers and their design teams being generally unaware of the conflict of interest provisions that govern most panels; consequently, many were concerned by what they saw as a lack of attention to such issues. To address these perceptions, panels may need to be far more explicit about their conflict of interest provisions, including being clear with applicants (as well as panellists) about the safeguards already in place.

The question of openness

The detailed research in London proved difficult to conduct for a number of reasons, foremost among them being the secretive practices of some panels. In part this reflects the commercial imperatives of market players, but also extends beyond these service providers to the Boroughs for whom design review is mainly being conducted. Thus, in a context where some of London’s large regeneration projects are proving controversial, many councils are happy not to expose their design review processes to scrutiny. For example, it was often not clear who was responsible for giving permission to conduct research on design review: the providers (who run the panel), the Borough (who commission it) or the applicants (who pay for it). This complexity means that in the new fragmented context for design review, it is all too easy to hide from the public gaze.

The research challenges also reflect a larger reality implicit in the move to a competitive market. As design review has evolved in recent years, it has moved away from some of the founding principles that were laid down in the days of publicly funded design review to govern its practices. These are currently summarized in the 10 principles encompassed in Design Review, Principles and Practice,[15] which state that design review should be: independent, expert, multidisciplinary, accountable, transparent, proportionate, timely, advisory, objective and accessible.

These principles are widely publicized on the websites of design review service providers and in the terms of reference of panels. Whilst the research confirmed that eight of the principles are routinely being delivered (or at least aspired to), on two there is little or no attempt to comply: the large majority of panels are patently neither ‘transparent’ nor ‘accessible’ by any measure that would meet national standards for public life.[16] As the methodological challenges confirmed, delivering on these two principles is far more difficult now, with a multiplicity of commercially aware providers, than it was in the past.

The costs and benefits of design review

With the widespread move from a publicly-funded service in England to a chargeable one, the headline fees of panels have been much debated. However, these are only part of the total cost of design review and, in reality, whether a design review process is charged for or not, it is never free.

Headline costs

By 2017, the headline fees paid by developers to have their projects submitted to design review in London varied significantly: from £0 to £5000 (plus VAT) for a single full review. The average among those included in the research (excluding panels that did not charge) was £3670. Fees are typically reduced by approximately £500 for a return review (when a site visit is not required) and are lower for a shorter ‘Chair’s review’ (on average £1500 less than a full review). Whilst information on fees is often no longer fully transparent, these fees were significantly less than the much larger fees originally envisaged by Design Council CABE, although more than the estimated average cost of £2500 per review in the days before national funding was withdrawn from CABE (see above).[17] There was no evidence that, as a category, external private, external not-for-profit or in-house panels necessarily cost more or less to run, or levied higher or lower fees, than panels in a different category.

Costs typically built into fees include paying the Chair and panel members (from £200 to £400 per half day), refreshments, room hire, travel and the hours spent organizing the review, preparing the briefing notes, getting the information ready, attending the review and writing it up. In other words, they cover all the directly incurred management costs of the organization responsible for setting up and running the reviews, plus a profit in the case of external suppliers and an overhead for some in-house suppliers. For example, the manager of one in-house service confirmed that the funding raised through design review is being used to support additional in-house urban design capacity that otherwise would not be available. Perhaps to compete, at least one external provider is actively exploring a new model with its local authority clients through which ‘the developer over pays’. This, they argue, ‘enables local authorities to put together a pot which they can then use for any other design services they want to buy from us, whether it’s capacity training, masterplanning, or a broader review of an area, they can choose any service they want’.

Design review is clearly seen by some as an area with revenue-raising potential beyond that needed to deliver the reviews alone. By way of comparison, for panels that are free to applicants there is no ring-fenced funding coming in for design review, and so all public funding allocated to the service is seen as a cost, relating largely to the time spent by officers organizing, conducting and writing up reviews. This contrasts with the free service for public realm schemes offered by Transport for London, which represents a benefit from one public sector organization to others (the Boroughs), with the costs internalized within the public sector.[18] Boroughs certainly seem to appreciate the support. As one public realm project manager stated: ‘For us, as a free service, it’s amazing. … The only other way we could do that is by paying a consultant to advise us’.

Hidden costs

Even if panels are fully paid for by the developer, there are still likely to be hidden costs for the public sector. As one case officer argued: ‘A lot of my time is spent on design review – preparing for it, attending and dealing with the implications of its recommendations – which is not costed as part of that service’. Another suggested that ‘design review is often the tip of the iceberg in work terms. A huge amount of work goes into pre-application advice on design’, but that this would most probably be even greater if design review was not there to assist. The comment suggests that there are potential workload compensations to be had as well.

For their part, developers and their teams were subject to two sets of substantial ‘hidden’ costs. Most design teams put a significant amount of work into preparing for design review. Indeed, as one panel manager admitted: ‘Whilst we stress that no additional design work should take place specifically for a review, one of the key hidden costs of the process is the work of a design team to prepare and attend a design review, and this is a cost met by the developer’. A prominent designer revealed: ‘You’re asking for trouble if you don’t see design review as a very serious and important milestone, so we would always put a lot of resources into ensuring that the design is in the best possible place, going into a review’.

Further costs are almost inevitably associated with the post-review period. These are inconsistent and depend on the nature of the scheme and how well the design was resolved before going into review. A panel’s recommendations will typically lead to further design costs, to potential delays to the development process, and to costs associated with the ongoing dialogue required to keep planners informed about how a project is responding to the review. These costs are likely to dwarf those paid to the provider of the design review.

Value and benefits

When asked whether the costs of design review represented value for money, interviewees overwhelmingly felt they did, seeing multiple benefits to the practice (see Table 4), although to varying degrees. Developers were the most sceptical, believing that the process needed to demonstrate that it was adding value in order to justify its continuing role, and that this meant economic and not just societal value. In this regard, design review can often work against maximizing the development potential of sites (in London, notably, by reducing heights and densities), but developers generally felt it was ‘a necessary evil to get planning permission in a timely manner’ care of a smoother and more streamlined planning process.

Table 4. The multiple potential benefits of design review as seen by stakeholders.[21]

Panel managers and local politicians (councillors) were (unsurprisingly) particularly supportive, arguing that: ‘When done well, design review is highly efficient, and it often saves time and money. The cost of the service is never more than a small proportion of the total development budget and is massively outweighed by the value it adds’; in effect, ensuring that projects meet the public interest as well as the private one. As one long-serving manager of a panel confided: ‘No-one has ever, in all my roles, ever quibbled about the cost of a design review – it’s not a problem’.

Despite this, there was a lingering sense that the public sector remained to be convinced of the importance of design locally, and that this lay behind the general unwillingness to pay for such services out of the public purse. As one interviewee suggested: ‘If the nation thinks that design reviews are valuable, it has to pay for it and if it doesn’t want to pay for it, that’s a commentary on its views about the importance of design’. All stakeholders accepted that design was important, but some would prefer what they saw as the certainty of a properly resourced and staffed design capacity within local planning authorities to reliance on an external design review panel. All accepted that the process did improve design, albeit at a cost.

Conclusion

This paper has explored three key issues: how processes of marketization in English design governance (specifically design review) have come about, what the characteristics of this new model are, and how it is performing. Evidence from three research projects revealed that design review in England has been on a significant journey. Coming out of the days (pre-2011) when design review was a state-led, state-funded, but also somewhat exceptional activity, the new market in design review services is making the activity both more widespread and more varied in its practices. Most seem to feel that this journey has been a positive one, leading to greater innovation in the sector and to a less paternalistic (top-down) character to design reviews. It is certainly encouraging a greater uptake of the practice, which, for advocates of the tool, must be regarded as a success. As one seasoned panellist commented:

This new generation of panels, particularly the professionalized, paid for, tailored, borough or authority specific panels, is becoming a very good model, which I think has been much better received than the old style “let’s parachute in a bunch of experts” who will pontificate and then they’ll clear off.

A market, successful but imperfect

What is clear is that today there is no single panel or set of practices that can be pointed to as ‘the’ exemplar to which all others should look. Instead, as Schuster (2005) contends, panels operate in many different ways – like a jury, as peer reviewer, regulator, mediator, educator etc. – often simultaneously. The question is: does (or should) design review also operate like a business?

The experience in England has suggested that there is no ‘practical’ reason why not. Indeed, the marketization of design governance through design review (with encouragement in national policy) seems to be delivering more design review than ever before with no obvious diminution of standards. Instead, it is widely recognized as improving standards of design, establishing a more positive environment within which good design can flourish, and encouraging a more efficient development process that is more formative and less summative in its critique ‒ all for a price that the market is willing to pay.

In reality, the situation in England (most notably in London) is not a pure market for design review. Instead, we have witnessed a hybrid model of marketization, with providers ranging from purely private to purely public and everything in between. There are also limited numbers of market players, suggesting that (in economic terms) what has been witnessed is more akin to an oligopoly than to a completely free market. Yet, despite the obvious limitations that such a system can place on achieving a competitive marketplace, the clients of these services (the local authorities in England) always have the option to eschew the market altogether and set up their own in-house panel, in the process taking the income and resources for themselves. The churn in the London market suggests that this has started to occur. In turn, this has meant that the providers of design review services have had to work hard for business, and that the prices they are able to charge have been considerably constrained.

A successful market does seem to be operating, but it is small, specialized and not nearly as lucrative as some had hoped at the start. Nor do its somewhat secretive practices help in marketing this design governance tool more widely and encouraging the practice of design review to grow. Indeed, until the publication of the research on which the latter part of this paper is based (see Carmona 2018), there had been no systematic attempt to share experiences and practices or to establish a learning culture between organizations in a manner that would benefit all the protagonists in the design review field, whether consumers or suppliers of the service. In this respect the public interest raison d’etre has withered, although ultimately the whole process almost always occurs at the behest of the public sector, and advice is proffered with the public interest of achieving better design firmly to the fore.

Is it transferable?

Turning to the last issue posed at the front end of this paper – the wider application of the practices discussed beyond the UK and beyond design review – the situation in England provides a rare example of the marketization of design governance services, although one that may have limited application elsewhere or to other tools of design governance (Carmona 2016). This is because, in England, design review is delivered through an informal (discretionary) but integrated process within a strong national policy framework. In other words, for the market to work there needs to be enough flexibility in the system to enable parallel, competing and non-binding models and providers of design review to operate. However, there also needs to be enough authority and/or incentive to ensure that developers feel it is in their interest to participate in (and pay for) design review, and that municipalities feel they should back its provision by ensuring that it occurs.

It is therefore possible to conclude with the hypothesis that the marketization of design governance is most likely to occur, and to be successful (delivering the multiple potential benefits of design review listed in Table 4), through an informal design review model operating within an integrated system of design decision-making, but one with enough force and, crudely, enough business to sustain it. It seems no accident that in England this has occurred most rapidly, and with the greatest degree of innovation, in London, precisely where the concentration of development and of municipal authorities (the Boroughs) – and therefore of market opportunities – is greatest. The regions are following more slowly, with providers often serving large geographic territories in order to generate enough business. Where London leads others are likely to follow; in other words, towards a rapidly maturing market in design review across the country.

In London, design review has received a further strong endorsement in the wording of the 2017 draft London Plan, which (when adopted) will require that any proposals referred to the Mayor[19] be subject to design review (Policy D2). No preference is expressed by the Mayor concerning how design review should be delivered, although providers should comply with new Mayoral guidance in a London Quality Review Charter (Mayor of London 2018), notably that such processes should be independent and transparently delivered, suggesting that at least some common recognition of standards is desirable.

At the same time, even in London the external providers of design review have found it tough to market other design governance services on the back of design review. Whilst there have been concerted attempts by some providers to upsell to their clients, none of the other informal tools of design governance that were so compelling under the auspices of publicly funded CABE (see Carmona, de Magalhaes, and Natarajan 2017, 2018) have proved saleable to nearly the same degree. It is clearly possible to successfully marketize aspects of design governance, but that does not absolve the public sector of its ultimate responsibility for the design of place; without the public sector creating the demand, there will surely never be any supply.

Acknowledgments

The author would like to give particular thanks to Valentina Giordano and Wendy Clarke for their invaluable assistance throughout the various research projects on which this paper is based. This work has been part funded by the Arts and Humanities Research Council.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Funding

This work was supported by the Arts and Humanities Research Council [AH/J013706/1].

Notes

1. This represents a tiny percentage of the 644,000 planning applications received in England that year, although most of those (in excess of 90%) were for relatively minor development projects, e.g., household extensions, that would not warrant design review (DCLG 2013).

2. How robustly this was determined remains unclear.

3. The Design Council was founded in 1944 to promote better standards of design in British industry.

4. For example, planning and development services were cut by 43% between 2010 and 2012 – http://www.ifs.org.uk/budgets/gb2012/12chap6.pdf .

5. http://www.designcouncil.org.uk/our-services/built-environment-cabe – these charges are no longer published online.

7. Others were to join later.

8. Income into built environment related activities was £700,000 and expenditure (including overheads) was £1,400,000.

9. Funded directly by Transport for London.

11. By the end of the 2013/2014 financial year, the Design Council was confident enough in the future of the new entity to incorporate Design Council CABE wholly within the operations of the main charity and to disband the subsidiary company that had been set up. CABE operations were thereby fully integrated with the Design Council and operated on a not-for-profit basis.

12. Those most often mentioned included: Design Council CABE; Design South East; Places Matter!; OPUN; Cambridgeshire Quality Panel; Integreat Plus; Design North East; and BOB-MK.

13. Residential sites of over 0.5 hectares or 10 units, or sites of over 1 hectare or 1000 sq metres of floorspace for all other uses.

14. Outside of London, at least one private provider in the South West region actively markets itself as a budget provider: ‘providing a larger pool of multidisciplinary experts and a larger panel of experts at each session, at around half the price of other regional “not-for-profit” Panels’ (Braddick 2018).

17. Although it is not clear what was included in this figure and how representative it was of the actual costs if all organizational overheads were taken into account.

18. It is also paid for in part through the Borough’s subscriptions to Urban Design London who manage the service.

19. In London, referable developments include: development of 150 residential units or more, development over 30 metres in height (outside the City of London), and development on Green Belt or Metropolitan Open Land.

20. All quotes from stakeholders interviewed during the course of the research.

21. All quotes from stakeholders interviewed during the course of the research.

References