Ethical engineering

Professional registration is an important milestone for any technician or engineer. Awarded by licensed bodies, professional registration is based on knowledge, competence and commitment. The assessment for registration will examine candidates in a number of areas, including exercising their responsibilities in an ethical manner.

Candidates will have to demonstrate awareness of, and compliance with, the engineering codes or value statements issued by the Engineering Council or their employer. This includes giving examples of where these have affected decisions or actions they have taken, such as stopping unsafe activities, preventing environmental damage or delivering unwelcome messages to stakeholders such as clients or senior managers. It may also include actions which have had measurable economic implications.

Ethical decisions

The Engineering Council has created a statement of ethical principles to guide engineering practice and behaviour. The principles place a duty on engineering professionals in the following four areas:

  1. Honesty and integrity – to uphold the highest standards of professional conduct including openness, fairness, honesty and integrity;
  2. Respect for life, law, the environment and public good – to obey all applicable laws and regulations and give due weight to facts, published standards and guidance and the wider public interest;
  3. Accuracy and rigour – to acquire and use wisely the understanding, knowledge and skills needed to perform their role;
  4. Leadership and communication – to abide by and promote high standards of leadership and communication.

The principles are commendable in any role but, in an age when increasing automation means that ethical decisions are being incorporated into complex systems using algorithms and rules, engineers are having to consider how machines should behave in scenarios they have never previously had to address. This applies to many spheres of engineering, including transport.

Autonomy in road vehicles is already challenging engineers, lawyers, insurers and others to rethink previously accepted principles and ideas. The introduction of artificial intelligence into railway control systems will create even more ethical issues that will have to be addressed.

The Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Railway Signal Engineers (IRSE) recently held a seminar to explore the underpinning principles associated with ethical issues in transport engineering and to suggest ways to address them.

Sir Charles Haddon-Cave presenting his keynote address.

Sir Charles Haddon-Cave is a judge serving in the Queen’s Bench Division of the High Court of England and Wales and has been involved in the fields of aviation, insurance, travel law and arbitration. He was responsible for the damning report into the crash of RAF Nimrod XV230 over Afghanistan in 2006, in which he was scathing about the money-saving edict that took priority over safety: “Unfortunately, the Nimrod Safety Case was a lamentable job from start to finish. It was riddled with errors. It missed the key dangers. Its production is a story of incompetence, complacency, and cynicism.” The report is available as a free download and is recommended reading for any engineer involved in safety engineering.

His keynote address set the tone for the seminar. Mr Justice Haddon-Cave explained that engineers face a challenge in designing driverless cars, driverless trains, drones, intelligent buildings and robots so that they operate in a way that reflects human values and principles. As Franklin D Roosevelt once said: “Rules are not necessarily sacred, but principles are.”

He commended the codes of conduct issued by professional institutions, but questioned whether they go deep enough to help engineers faced with really difficult ethical and moral decisions. Mr Justice Haddon-Cave said that there are other tools to help, one being ALARP (as low as reasonably practicable), which is deeply embedded in common law and asks whether an action is reasonable, given all the facts that have to be taken into account.

ALARP is about making sure a risk has been reduced by weighing it against the resource (termed ‘sacrifice’ in law) required to reduce it further. The decision is weighted in favour of health and safety because the presumption is that risk-reduction measures should be implemented. To avoid having to implement a measure, the sacrifice must be grossly disproportionate to the risk-reduction benefit that would be achieved.
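As a rough illustration of that weighing exercise – the gross-disproportion factor below is an assumption made for the sketch, not a figure from law or guidance – the test can be expressed in a few lines of Python:

```python
# Illustrative sketch only: a toy ALARP-style screening test.
# The gross-disproportion factor is an assumption for this example;
# in practice it is a matter of judgement and case law.

def alarp_requires_measure(risk_reduction_benefit: float,
                           sacrifice: float,
                           gross_disproportion_factor: float = 3.0) -> bool:
    """Return True if a risk-reduction measure should be implemented.

    risk_reduction_benefit: monetised value of the risk reduction
    sacrifice: the cost (money, time, trouble) of the measure
    """
    # The presumption favours safety: implement the measure unless the
    # sacrifice is grossly disproportionate to the benefit.
    return sacrifice <= gross_disproportion_factor * risk_reduction_benefit

# A £200,000 measure buying £100,000 of risk reduction is not grossly
# disproportionate at a factor of 3, so it should still be implemented.
print(alarp_requires_measure(100_000, 200_000))  # True
```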

Another ethical tool that Mr Justice Haddon-Cave demonstrated was the Heinrich Triangle Theory. Heinrich proposed that, for every major injury, loss or event, there are 29 minor and 300 no-injury accidents, losses or events. So, ethically, to reduce a major risk, it is necessary to investigate and eliminate the far greater number of minor and no-injury accidents or losses. Or, to put it another way, don’t just look at the tip of the iceberg; think about what’s below the surface.
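As a back-of-the-envelope illustration (the event count below is invented), the 1:29:300 ratio can be applied in a couple of lines:

```python
# A minimal sketch of Heinrich's 1:29:300 ratio. The ratio is an
# empirical claim from 1931 insurance data, not a law, and the event
# count below is invented for illustration.

HEINRICH_RATIO = {"major": 1, "minor": 29, "no_injury": 300}

def implied_major_events(no_injury_events: int) -> float:
    """Estimate the major events implied by a count of no-injury events."""
    return no_injury_events * HEINRICH_RATIO["major"] / HEINRICH_RATIO["no_injury"]

# 600 recorded near-misses would, on Heinrich's ratio, point towards
# roughly two major events unless the underlying causes are eliminated.
print(implied_major_events(600))  # 2.0
```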

He recommended the adoption of four key ethical principles:

  1. Leadership – strong clear leadership from the very top;
  2. Independence throughout the regulatory regime;
  3. People (not just process and paper);
  4. Simplicity – regulation, processes and rules must be as simple and straightforward as possible.

Mr Justice Haddon-Cave finished his keynote speech by emphasising that any safety management system must be kept simple, and that the greatest risk to safety and ethical engineering is complexity.

Heinrich’s safety pyramid, which dates from 1931 and was verified and updated by Frank Bird in the 1970s, is based on insurance company claim records.

Hindsight bias

Professor George Bearfield, director of system safety and health at RSSB, said that dealing with complex safety engineering in rail can already be an ethical minefield, as investment decisions have to be made over very long time-frames, during which political, ethical and social concerns and tolerance standards may change. Where safety or accident risk is involved, tensions will be high and decisions are often governed by what is affordable and the balance of risk.

When making ethically based decisions, one of the traps that can occur is ‘hindsight bias’. This is the inclination to see past events, such as accidents, as being more predictable than they really were. If such events are seen as having been predictable – an accident waiting to happen – it places great importance on the ability of a transport operator to argue that it had appropriate safety measures in place.

Hindsight bias can also fuel knee-jerk reactions. How many times have we heard, after a major accident, a politician quickly say that “money is not a problem”? This is often an unethical statement as, in many cases, money will be a problem when ALARP is applied and it is determined that the money involved could be far better used to lower risk somewhere else.

Professor Bearfield recommended the RSSB document “Taking Safe Decisions – How Britain’s railways take decisions that affect safety” for anyone involved in safety management. This is available from the RSSB website and it discusses many of the topics relating to ethical engineering.

Ethics from a technologist’s perspective

Paul Campion, CEO of the Transport Systems Catapult, observed that society and transport engineering are about to face a huge challenge which will require ethical consideration. Publishing, music, finance, retail (home shopping) and some other industries have been fundamentally transformed by IT and communications, but significant changes to the transport sector have yet to take place.

The silo boundaries of transport are likely to be broken down and transformed by new technology, which will raise ethical questions. These are initially likely to relate to the use of data, and the recent Cambridge Analytica and Facebook scandal has highlighted some of the ethical issues that can arise.

When actress Mary McCormack’s husband’s Tesla caught fire while he was driving in the Los Angeles area, her post on Twitter was shared 1.5 million times. However, conventional vehicles also catch fire, and electric cars may be less likely to catch fire than petrol and diesel vehicles – it’s just that society will demand higher standards of new technology.

It has been suggested that an autonomous vehicle will have to be many times safer than a manually driven vehicle for it to be accepted and allowed on the road. Consider the example of an autonomous car in a queue of cars joining a busy main road at a T-junction. Traffic starts to build up and slow down. Cars with drivers in front of the autonomous car, when at the front of the queue, ‘nudge forward’ and cars on the main road let them in. Should the autonomous car be programmed to do the same, or should it wait until the road is clear, which could take hours?

Should the autonomous car have a sliding scale of ‘caution or bold’ that could be selected by the client? If ‘caution’ is chosen, it could be there for hours, waiting. Select ‘bold’ and the risk of an accident increases. Does that sound safe and ethical?
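To make that trade-off concrete, here is a purely hypothetical sketch of a ‘caution or bold’ setting implemented as a gap-acceptance threshold. The parameter name and the threshold values are invented for illustration; no production vehicle is known to expose such a control:

```python
# Hypothetical sketch of a 'caution vs bold' gap-acceptance policy for
# the T-junction example above. Parameter names and thresholds are
# invented for illustration only.

def accept_gap(gap_seconds: float, boldness: float) -> bool:
    """Decide whether to pull into a gap in main-road traffic.

    boldness: 0.0 (maximum caution) to 1.0 (maximum boldness).
    A cautious setting demands a large gap; a bold one accepts a small
    gap, nudging forward much as human drivers do - with the ethical
    trade-off that accident risk rises as the threshold falls.
    """
    required_gap = 8.0 - 6.0 * boldness  # 8 s at full caution, 2 s at full bold
    return gap_seconds >= required_gap

print(accept_gap(gap_seconds=4.0, boldness=0.2))  # False: keeps waiting
print(accept_gap(gap_seconds=4.0, boldness=0.8))  # True: pulls out
```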

Engineers of the future will face many of these issues. For example, what happens if a perfect autonomous car could be developed such that it will always act to avoid accidents, but other road users know this and start to deliberately pull out in front of it? Do engineers then deliberately make the autonomous car less safe? If an autonomous car has to act to avoid an accident, what rules apply if the choice of action is to hit a pedestrian or another vehicle?

To be effective, the autonomous vehicle will have to be more human-like and make ethically based decisions. It will have to be provided with artificial intelligence (AI) so that it can learn and adopt different behaviours, similar to a human. Let’s assume that a car can be taught to drive itself through AI. If it makes a mistake due to the way it has learned, who is to blame? The designer, the programmer, the tester or the salesperson?

Some of these ethical issues with autonomous vehicles may also apply to driverless trains.

The use of autonomous vehicles, like this one recently launched by Keolis in Canada, raises many issues.

Unethical AI chatbot

An example of AI behaving unethically was Tay – a Microsoft ‘chatbot’ which used AI to respond to users’ queries and emulate casual, jokey speech patterns. However, when it began posting racist messages in response to questions, it quickly had to be shut down.

It was identified that Tay was vulnerable to people who persuaded it to use racial slurs and defend white-supremacist propaganda – even outright calls for genocide. The racism was not a product of Microsoft or of Tay itself; Tay was simply a piece of software trying to learn how humans talk in conversation. It didn’t even know what racism was, but spouted ‘unethical obscene language’ because racist humans on Twitter quickly spotted a vulnerability and exploited it. The problem was that Tay didn’t understand what it was talking about.

Microsoft’s developers didn’t include any filters on the words that Tay could or could not use, and the company came under heavy criticism for the bot’s lack of filters, with some arguing (with great hindsight, of course) that it should have expected and pre-empted the abuse. Now imagine what unethical behaviours an AI safety-related system may be vulnerable to when interfacing with unethical humans.

Tay – Microsoft’s chatbot – had to be shut down for ‘unethical obscene language’ it had learned from racist humans.

AI in railway control systems

Artificial intelligence is an algorithm, mathematical model or piece of software that can ‘learn’ what to do and improve its own performance over time, based on information from its own past performance. While deterministic software does exactly what it was told to do by the programmer, AI software is programmed only with a learning mechanism – some kind of trial-and-error routine. This means the behaviour of AI software can never be completely foreseen, only taught.
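The distinction can be shown with a toy example. The sketch below pairs a fully specified deterministic rule with a generic trial-and-error (epsilon-greedy) learner; it is illustrative only and is not a railway algorithm:

```python
# A minimal sketch of the deterministic/learned distinction. Nothing
# tells the learner which action is best; it discovers that from the
# rewards its own past choices produced.

import random

def deterministic_controller(sensor: str) -> str:
    # Behaviour fully specified in advance by the programmer.
    return "brake" if sensor == "obstacle" else "proceed"

class TrialAndErrorLearner:
    def __init__(self, actions, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon

    def choose(self) -> str:
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action: str, reward: float) -> None:
        # Incremental average of the rewards this action has produced.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

learner = TrialAndErrorLearner(["brake", "proceed"])
for _ in range(1000):
    action = learner.choose()
    learner.learn(action, reward=1.0 if action == "brake" else 0.0)
print(learner.values)  # 'brake' ends up valued near 1.0 - learned, not programmed
```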

There are a few applications where a computer cannot be given the opportunity to make mistakes. Safety is one of these, as a critical software mistake may result in loss of life. AI can therefore never be used to make the final safety decision, but that doesn’t mean AI cannot be used in control systems: it can learn to be better than a human but, just like a human, it needs protecting.

If a signaller attempts to put two trains onto a collision course, the interlocking system will not authorise such a manoeuvre. Humans can fail, and AI should be treated in the same way. For some applications, programming and teaching AI can be a lot cheaper and quicker than classical logical programming. AI should therefore not be ignored or avoided in railway control systems. Just like any new technology, engineers must learn new skills to develop and adopt the new ways of working to create a better tomorrow.
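A minimal sketch of that ‘protect the decision-maker’ pattern, with invented route and conflict data, might look like the following. Real interlockings are far richer and are themselves safety-certified; the point is only that the deterministic layer refuses unsafe requests regardless of who, or what, makes them:

```python
# Hedged sketch: an AI (or a human signaller) proposes routes, and a
# simple deterministic interlocking vetoes any proposal that conflicts
# with a route already set. Route and conflict data are invented.

CONFLICTS = {("route_A", "route_B")}  # pairs that would set trains on a collision course

class Interlocking:
    def __init__(self):
        self.routes_set = set()

    def request(self, route: str) -> bool:
        """Authorise a route only if it conflicts with nothing already set."""
        for other in self.routes_set:
            if (route, other) in CONFLICTS or (other, route) in CONFLICTS:
                return False  # refused, whether requested by human or AI
        self.routes_set.add(route)
        return True

ixl = Interlocking()
print(ixl.request("route_A"))  # True: route set
print(ixl.request("route_B"))  # False: would conflict, so it is refused
```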

IEEE P7000 – Model Process for Addressing Ethical Concerns During System Design

Professor Ali Hessami is an expert in systems assurance and safety. Technical editor of IEEE standard P7000 – Model Process for Addressing Ethical Concerns During System Design – he represents the UK on CENELEC and IEC safety systems committees. As the discussions during the seminar had identified, engineers need a methodology for identifying, analysing and reconciling ethical concerns of end users.

Approximately 40 people are expected to be actively involved in the development of the P7000 standard, the scope of which is to establish a process model by which engineers and technologists can address ethical considerations throughout the various stages of system initiation, analysis and design.

The purpose of the standard being produced by Ali Hessami and the IEEE is to provide engineers and technologists with an implementable method of aligning innovation management processes, system design approaches and software engineering methods to minimise ethical risk for their organisations, stakeholders and end users. It is planned for publication early in 2019 and will be the first global standard to guide ethical principles in engineering design.

Strong clear leadership from the very top was Mr Justice Haddon-Cave’s first recommendation.

Apples or cookies

There was an extensive group discussion following the presentations, and follow-up sessions are being considered. The event reinforced the message that the engineering professions must produce engineers who have the will and the intellectual capacity to engage with bigger questions about the ethics and social ramifications of their work, as human behaviour can easily slip into unethical actions.

Mr Justice Haddon-Cave told a story to illustrate this point. Children were lined up in the cafeteria of a Catholic elementary school for lunch. At the head of the table was a large pile of apples. The nun made a note, and posted it on the apple tray: “Take only ONE. God is watching.” Moving further along the lunch line, at the other end of the table was a large pile of chocolate chip cookies. A child had written another note: “Take all you want. God is watching the apples!”

Paul Darlington CEng FIET FIRSE
http://therailengineer.com

SPECIALIST AREAS
Signalling and telecommunications, cyber security, level crossings


Paul Darlington joined British Rail as a trainee telecoms technician in September 1975. He became an instructor in telecommunications and moved to the telecoms project office in Birmingham, where he was involved in designing customer information systems and radio schemes. By the time of privatisation, he was a project engineer with BR Telecommunications Ltd, responsible for the implementation of telecommunications schemes including Merseyrail IECC resignalling.

With the inception of Railtrack, Paul moved to Manchester as the telecoms engineer for the North West. He was, for a time, the engineering manager responsible for coordinating all the multi-functional engineering disciplines in the North West Zone.

His next role was head of telecommunications for Network Rail in London, where the foundations for Network Rail Telecoms and the IP network now known as FTNx were put in place. He then moved back to Manchester as the signalling route asset manager for LNW North and led the control period 5 signalling renewals planning. He also continued as chair of the safety review panel for the national GSM-R programme.

After a 37-year career in the rail industry, Paul retired in October 2012 and, as well as writing for Rail Engineer, is the managing editor of IRSE News.
