Failures and breakthroughs – exposed, reflected, considered

Bayes craze, neural networks and uncertainty


Story, context and hype

Named after its inventor, the 18th-century Presbyterian minister Thomas Bayes, Bayes’ theorem is a method for calculating the validity of beliefs (hypotheses, claims, propositions) based on the best available evidence (observations, data, information). Here’s the most dumbed-down description: Initial/prior belief + new evidence/information = new/improved belief.

P(B|E) = P(B) × P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) is the probability that B is true given that E is true, and P(E|B) is the probability that E is true given that B is true.
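As a quick, hedged illustration, here is a small Python sketch applying the formula to a made-up medical-test scenario (all numbers are invented purely for illustration):

```python
# Bayes' theorem: P(B|E) = P(B) * P(E|B) / P(E)
# Illustrative (invented) numbers: a condition with 1% prevalence,
# a test with 90% sensitivity and a 5% false-positive rate.
p_b = 0.01              # prior P(B): belief before seeing evidence
p_e_given_b = 0.90      # P(E|B): probability of evidence if belief is true
p_e_given_not_b = 0.05  # false-positive rate

# Total probability of the evidence, P(E), by the law of total probability
p_e = p_e_given_b * p_b + p_e_given_not_b * (1 - p_b)

# Posterior: the new/improved belief after seeing the evidence
p_b_given_e = p_b * p_e_given_b / p_e
print(round(p_b_given_e, 3))  # 0.154
```

Note how a positive result from a fairly accurate test still leaves only a ~15% posterior, because the prior was low; this is exactly the "prior + evidence = improved belief" update described above.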

Bayes’ theorem has recently become ubiquitous in modern life and is applied in everything from physics to cancer research, psychology to ML spam algorithms. Physicists have proposed Bayesian interpretations of quantum mechanics and Bayesian defences of string and multiverse theories. Philosophers assert that science as a whole can be viewed as a Bayesian process, and that the Bayesian approach can distinguish science from pseudoscience more precisely than falsification, the method popularised by Karl Popper. Some even claim Bayesian machines might be so intelligent that they make humans “obsolete.”

Bayes going into AI/ML

Neural networks are all the rage in AI/ML. They learn tasks by analysing vast amounts of data and power everything from face recognition at Facebook to translation at Microsoft to search at Google. They’re beginning to help chatbots learn the art of conversation. And they’re part of the movement toward driverless cars and other autonomous machines. But because they can’t make sense of the world without help from such large amounts of carefully labelled data, they aren’t suited to everything. Induction is the prevalent approach in these learning methods, and they have difficulty dealing with uncertainty, with probabilities of future occurrences of different types of data/events, and with “confident error” problems.

Additionally, AI researchers have limited insight into why neural networks make particular decisions. They are, in many ways, black boxes. This opacity could cause serious problems: What if a self-driving car runs someone down?

Regular/standard neural networks are bad at calculating uncertainty. However, there is a recent trend of bringing Bayes (and other alternative methodologies) into this game too. AI researchers, including those working on Google’s self-driving cars, have started employing Bayesian software to help machines recognise patterns and make decisions.

Gamalon, an AI startup that went live earlier in 2017, touts a new type of AI that requires only small amounts of training data – its secret sauce is Bayesian Program Synthesis.

Rebellion Research, founded by the grandson of baseball great Hank Greenberg, relies upon a form of ML called Bayesian networks, using a handful of machines to predict market trends and pinpoint particular trades.

There are many more examples.

The dark side of Bayesian inference

The most notable pitfall of the Bayesian approach is the calculation of the prior probability. In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into calculations. Some prior probabilities are unknown or don’t even exist, such as those for multiverses, inflation or God. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

In 1997, Microsoft launched its animated MS Office assistant Clippit, which was conceived to work on a Bayesian inference system but failed miserably.

In law courts, Bayesian principles may lead to serious miscarriages of justice (see the prosecutor’s fallacy). In a famous example from the UK, Sally Clark was wrongly convicted in 1999 of murdering her two children. Prosecutors had argued that the probability of two babies dying of natural causes (the prior probability that she is innocent of both charges) was so low – one in 73 million – that she must have murdered them. But they failed to take into account that the probability of a mother killing both of her children (the prior probability that she is guilty of both charges) was also incredibly low.

So the relative prior probabilities that she was totally innocent or a double murderer were more similar than initially argued. Clark was later cleared on appeal, with the appeal court judges criticising the use of the statistic in the original trial. Here is another such case.
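The reasoning above can be sketched in a few lines of Python. The 1-in-73-million figure is the one cited at trial; the double-murder prior below is an invented placeholder, used only to show why two tiny priors must be compared rather than one of them quoted in isolation:

```python
# Prosecutor's fallacy sketch: a tiny prior for innocence proves nothing
# by itself; it must be weighed against the (also tiny) prior for guilt.
# 1-in-73-million is the trial figure; the murder prior is invented.
p_two_natural_deaths = 1 / 73_000_000    # prior: both deaths natural
p_double_murder = 1 / 1_000_000_000      # hypothetical prior: double murder

# If the observed evidence (two infant deaths) is equally likely under
# both hypotheses, the posterior odds of guilt reduce to the prior ratio.
odds_of_guilt = p_double_murder / p_two_natural_deaths
print(round(odds_of_guilt, 3))  # 0.073 -> guilt is far from certain
```

Under these illustrative numbers the odds actually favour innocence, which is the core of the appeal court's criticism: the jury heard only one of the two small priors.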

Alternative, complementary approaches

In actual practice, the method of evaluation most scientists/experts use most of the time is a variant of a technique proposed by Ronald Fisher in the early 1900s. In this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed 95% or 99% of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
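A minimal sketch of this Fisher-style logic in Python: estimate, by simulation, how often purely random data would look at least as extreme as an observed result. The coin-flip numbers are invented for illustration:

```python
# Fisher-style significance test by simulation: how often would purely
# random data look at least as extreme as what we actually observed?
import random

random.seed(0)
observed_heads = 16           # e.g. we observed 16 heads in 20 flips
n_flips, n_sims = 20, 100_000

extreme = 0
for _ in range(n_sims):
    # simulate a run of fair coin flips (the "random data" hypothesis)
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        extreme += 1

p_value = extreme / n_sims    # chance random data looks this extreme
print(p_value < 0.05)         # passes the 95% test described above
```

No prior is needed anywhere in this calculation, which is exactly the advantage (and, critics would say, the limitation) of the approach.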

As many AI/ML algorithms automate their optimisation and learning processes, careful consideration of the Gaussian process they deploy, including the type of kernel and the treatment of its hyper-parameters, can play a crucial role in obtaining a good optimiser that achieves expert-level performance.

Dropout, a technique that addresses the overfitting problem and has been in use for several years in deep learning, also enables uncertainty estimates by approximating those of a Gaussian process. The Gaussian process is a powerful tool in statistics that allows modelling distributions over functions and has been applied in both the supervised and unsupervised domains, for both regression and classification tasks. It offers nice properties such as uncertainty estimates over the function values, robustness to over-fitting, and principled ways for hyper-parameter tuning.
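The idea of getting uncertainty from dropout (often called Monte Carlo dropout) can be sketched in pure Python: keep dropout active at prediction time, run several stochastic forward passes, and treat the spread of the outputs as an uncertainty signal. The tiny one-layer "network" and its weights below are invented for illustration:

```python
# Monte Carlo dropout sketch: run many stochastic forward passes with
# dropout left ON, then read prediction = mean, uncertainty = spread.
import random
import statistics

random.seed(42)
weights = [0.5, -1.2, 0.8, 2.0]   # hypothetical trained weights
x = [1.0, 0.5, -0.3, 0.7]         # one input example
drop_p = 0.5                      # dropout probability

def forward(x, weights):
    # Randomly zero each weight; scale survivors by 1/(1-p) so the
    # expected output matches the deterministic network.
    kept = [w if random.random() > drop_p else 0.0 for w in weights]
    return sum(xi * wi / (1 - drop_p) for xi, wi in zip(x, kept))

samples = [forward(x, weights) for _ in range(1000)]
mean = statistics.mean(samples)    # the prediction
stdev = statistics.stdev(samples)  # the uncertainty estimate
print(round(mean, 2), round(stdev, 2))
```

A confident network would show a small spread across passes; a large spread flags exactly the "confident error" cases that standard networks cannot signal.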

Google’s Project Loon uses Gaussian processes (together with reinforcement learning) for its navigation.

101 and failures of Machine Learning


Nowadays, ‘artificial intelligence’ (AI) and ‘machine learning’ (ML) are clichés that people use to signal awareness of technological trends. Companies tout AI/ML as panaceas for their ills and a competitive advantage over their peers. From flower recognition to the algorithm that beat the Go champion, to big financial institutions, including ETFs of the world’s biggest hedge fund, many are already in, or moving to, the AI/ML era.

However, as with any new technological breakthrough, discovery or invention, the path is laden with misconceptions, failures, political agendas, etc. Let’s start with an overview of the basic methodologies of ML, the foundation of AI.

101 and limitations of AI/ML

The fundamental goal of ML is to generalise beyond specific examples/occurrences of data. ML research focuses on experimental evaluation on actual data for realistic problems. ML performance is then evaluated by training a system (algorithm, program) on a set of training examples and measuring its accuracy at predicting novel test (or real-life) examples.

The most frequently used methods in ML are induction and deduction. Deduction goes from the general to the particular, and induction goes from the particular to the general. Deduction is to induction what probability is to statistics.

Let’s start with induction. The domino effect is perhaps the most famous instance of induction. Inductive reasoning consists in constructing axioms (hypotheses, theories) from the observation of supposed consequences of these axioms. Induction alone is not that useful: the induction of a model (a piece of general knowledge) is interesting only if you can use it, i.e. if you can apply it to new situations, by going somehow from the general to the particular. This is what scientists do: observing natural phenomena, they postulate the laws of Nature. However, there is a problem with induction: it is impossible to prove that an inductive statement is correct. At most, one can empirically observe that the deductions that can be made from this statement are not in contradiction with experiments. But one can never be sure that no future observation will contradict the statement. The Black Swan theory is the most famous illustration of this problem.

Deductive reasoning consists in combining logical statements (axioms, hypotheses, theorems) according to certain agreed-upon rules in order to obtain new statements. This is how mathematicians prove theorems from axioms. Proving a theorem is nothing but combining a small set of axioms according to certain rules. Of course, this does not mean proving a theorem is a simple task, but it could theoretically be automated.

A problem with deduction is exemplified by Gödel’s theorem, which states that for a rich enough set of axioms, one can produce statements that can be neither proved nor disproved.

Two other kinds of reasoning exist, abduction and analogy; neither is frequently used in AI/ML, which may explain many current AI/ML failures/problems.

Like deduction, abduction relies on knowledge expressed through general rules. Like deduction, it goes from the general to the particular, but it does so in an unusual manner, since it infers causes from consequences: from “A implies B” and “B”, A can be inferred. For example, most of a doctor’s work is inferring diseases from symptoms, which is what abduction is about. “I know the general rule which states that flu implies fever. I’m observing fever, so there must be flu.” However, abduction is not able to build new general rules: induction must have been involved at some point to state that “flu implies fever”.

Lastly, analogy goes from the particular to the particular. The most basic form of analogy is based on the assumption that similar situations have similar properties. More complex analogy-based learning schemes, involving several situations and recombinations, can also be considered. Many lawyers use analogical reasoning to analyse new problems based on previous cases. Analogy completely bypasses model construction: instead of going from the particular to the general, and then from the general to the particular, it goes directly from the particular to the particular.

Let’s next check some conspicuous failures in AI/ML (in 2016) and the corresponding AI/ML methodology that, in my view, was responsible for each failure:

Microsoft’s chatbot Tay utters racist, sexist, homophobic slurs (mimicking/analogising failure)

In an attempt to form relationships with younger customers, Microsoft launched an AI-powered chatbot called “Tay.ai” on Twitter in 2016. “Tay,” modelled on a teenage girl, morphed into a “Hitler-loving, feminist-bashing troll” within just a day of her debut online. Microsoft yanked Tay off the social media platform and announced it planned to make “adjustments” to its algorithm.

AI-judged beauty contest was racist (deduction failure)

In “The First International Beauty Contest Judged by Artificial Intelligence,” a robot panel judged faces based on “algorithms that can accurately evaluate the criteria linked to perception of human beauty and health.” But because the organisers failed to supply the AI/ML with a diverse training set, the contest winners were all white.

Chinese facial recognition study predicted convicts but shows bias (induction/abduction failure)

Researchers in China published a study entitled “Automated Inference on Criminality using Face Images.” They “fed the faces of 1,856 people (half of whom were convicted violent criminals) into a computer and set about analysing them.” The researchers concluded that there were some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Many in the field questioned the results and the report’s ethical underpinnings.

Concluding remarks

The above examples must not discourage companies from incorporating AI/ML into their processes and products. Most AI/ML failures seem to stem from a band-aid, superficial way of embracing AI/ML. A better and more sustainable approach to incorporating AI/ML would be to initiate a mix of projects generating both quick wins and long-term transformational products/services/processes. For quick wins, a company might focus on changing internal employee touchpoints, using recent advances in speech, vision, and language understanding, etc.

For long-term projects, a company might go beyond local/point optimisation, to rethinking business lines, products/services, end-to-end processes, which is the area in which companies are likely to see the greatest impact. Take Google. Google’s initial focus was on incorporating ML into a few of their products (spam detection in Gmail, Google Translate, etc), but now the company is using machine learning to replace entire sets of systems. Further, to increase organisational learning, the company is dispersing ML experts across product groups and training thousands of software engineers, across all Google products, in basic machine learning.

 

The rise and demise of GM’s Saturn


Not very long ago, on a cold, wintry day in January 1985, the top man at GM, Roger B. Smith, unveiled ‘Saturn’, the first new brand to come out of GM in almost seven decades. A stand-alone subsidiary of GM, Saturn had a promising birth and was touted as a ‘different’ type of car. With its own assembly plant, unique models and separate retailer network, Saturn operated independently from its parent company. It was a cut above the rest in using innovative technology and involving its employees in the decision-making process. Conceived as a fighter brand to take on the Japanese marques, this small car of superior quality was the product of strong principles, with a mission of being America’s answer to Japan’s challenge. It reaffirmed the strength of American technology, ingenuity and productivity by combining advanced technology with the latest approaches to management.

Though a revolutionary idea, Saturn wasn’t able to live up to the hype or the hopes of Roger Smith. The case of Saturn is definitely one for the books. Its marketing campaign fired up the public’s imagination and interest perfectly, while the product was a miserable failure. Everything the company did was another leaf out of the handbook of perfect PR. When the first lot of cars had bad engine antifreeze, the company replaced the entire car instead of just the coolant, much to customers’ delight.

Besides clever marketing, Saturn’s biggest assets were its passionate employees and a customer-centric approach, which rewarded it with a quick victory. The victory was, however, short-lived, as GM was reluctant to expand Saturn’s offerings for fear of cannibalizing the sales of its other divisions. In the existing models, Saturn’s engine had inferior motor mounts, the plastic dashboard panels gave the car a cheap look, and even the plastic-polymer doors, the so-called unique feature, failed to fit properly. Overall, the car had neither an identity nor a USP. To make things worse, Roger Smith was on a spending spree, from throwing recall parties when vehicle problems were solved to hosting “homecoming” celebrations at plants. This saddled GM with high costs, increasing doubts about Saturn’s survival among the leaders of GM.

Disaster struck further when Saturn’s sub-compact prices failed to cover the huge costs gobbled up by a dedicated plant with massive operating costs. The fact that the plant churned out cars that barely shared any common parts with other GM brands did not help at all. To top it all, at a time when buyers were snapping up minivans and SUVs, Saturn’s offerings were limited to three small models for over a decade, thereby losing out on locking customers in. Just when GM was pondering the decision to scrap the car, the UAW visited one of Saturn’s production facilities with its international contract, only to be rejected by the workers. As obvious as it seemed, the company’s unique labor contract was dissolved, and GM had no choice but to part with the brand by dividing production among other GM plants.

Automotive history has witnessed myriad failure stories of brands that were supposed to be world-class products but ended up biting the dust. One such underachiever was Vector, which sprouted from the aim of producing an American supercar but was doomed by cash-flow issues, mismanagement and a failure to keep up with its insane promises. Sterling, Rover’s disguise in the American market, was another lost car of the 80s that most people haven’t even heard of. Its promise of delivering “Japanese reliability and refinement with traditional British luxury and class” couldn’t save it from a continuous sales drop and competition from new Japanese rivals. A few other epic automotive experimental failures worth recalling here would include Chrysler’s TC by Maserati, the Subaru SVX, Jaguar X-Type, Lincoln Blackwood, GMC Envoy XUV, Chevrolet SSR, Chrysler Crossfire and Dodge Durango Hybrid/Chrysler Aspen Hybrid. While some were design disasters, the others just couldn’t perform.

The automobile industry is governed by various factors, including the technological advancements of the time, economic conditions and fluctuations in consumer needs. The latest automotive chips on the block are electric cars, which are set to revolutionize the entire industry. LeEco, a Chinese electronics company, is taking serious steps to target Tesla, investing $1.08 billion in developing its debut electric car. Tesla was the name that paved the way for an electric-vehicle era. Whether LeSEE, LeEco’s concept sedan, can surpass Tesla’s performance and give them a run for their money is something only time will tell. If successful, these electric cars could be the game changers of this century, ushering in an electric future. If not, it will fade away and claim its place as a bittersweet memory on the list of flops that the industry has had.

Written by Hayk

April 8, 2017 at 4:18 am

Can technology fail humanity?


Technology, a combination of two Greek words signifying ‘systematic treatment of art/craft/technique,’ is:

the collection of techniques, skills, methods and processes used in the production of goods or services or in the accomplishment of objectives.

Whether it was the discovery of fire, the building of shelter or the invention of weapons – and, in modern times, the invention of the Internet, microchips, etc. – it has always been about inventing, discovering and using information, techniques and tools to induce or cause economic, scientific and social progress or improvement.

However, the progress that technology has caused has been neither linear, nor inevitable, nor ubiquitous, nor even obvious. All Four Great Inventions of China happened before the 12th century AD. On the other hand, despite Hippocrates’ treatise (dating from 400 BC) arguing that, contrary to the common ancient Greek belief that epilepsy was caused by offending the moon goddess Selene, it had a cure in the form of medicine and diet, 12th-14th century Christendom perceived epilepsy as the work of demons and evil spirits, whose cure was to pray to St. Valentine and other saints. And in many cases, progress in technology itself, or its consequences, has been a matter of pure chance or serendipity, whether penicillin, X-rays or 3M’s Post-its.

So, ironic as it is, until recently technology hasn’t been very systematic in its own progress, let alone in its impact on the society, economy and culture of nations. But it has become a lot more systematic since the dawn of the Information Age, the last 60 or so years. Since microchips, computer networks and digital communication were invented (all in the US), technology has become more systematic in its own progress, and it is becoming more miniature, cheaper, faster and more ubiquitous than ever before in human history. Ubiquitous technology makes the world hyper-connected and digital. Whether it is our phones, thermostats, cars or washing machines, everything is becoming connected to everything. It is thus no coincidence that California (Silicon Valley + Hollywood) has recently become the 6th largest economy in the world, thanks to the technological and creative progress it has embodied over the last 60 or so years.

The Trump era began in January 2017, and he has already done more to damage any potential technological and scientific progress coming from the US than any of his predecessors. From trying to unreasonably curb immigration from Muslim countries, to terminating the TPP, to undoing progress in transitioning to clean energy and refocusing on coal, to disempowering the OSTP, Trump wraps his decisions in firebrand rhetoric and well-thought-out psychological biases (the anchoring bias is his favourite) around one message: MAGA. Hopes are turning to China as the next flag-bearer of technological progress.

Nowadays, even coffee shops are hyper-connected, aiming to personalize our coffee-drinking experience. And thanks to the omnipresence and pervasiveness of the Internet, wireless connections, telecommunications, etc., technology (smartphones, games, virtual worlds, 3D headsets, etc.) is becoming an end in itself. In countries and cities like Singapore, Hong Kong and New York, digital and smartphone addiction is already a societal problem causing unintended deaths, lack of maturity, loss of educational productivity and marriage break-ups, to cite but a few. In Singapore, where according to recent research Millennials spend an average of 3.4 hours a day on their smartphones, the government is now putting in place policies and organizations to tackle this psychological addiction.

However, even Bernie Sanders knows that technology cannot and should not be an end in itself or an addiction. Could the Internet and technologies fail? Could the Internet and the thinking linked to it spell the end of capitalism? Could it cause societies, cultures and nations to fail?

Technology has proven to fail itself and us when it becomes an end in itself.

Only when it stays true to its nature and acts as an enabler, a platform for human endeavors, will technology succeed. It can even end poverty or the other problems and issues the human race is facing.

 

How HR departments fail companies from inside and outside


Let’s start by analysing how HR departments sometimes “wreak havoc” on the human resources of a company.

The infamous Fast Company article of 2005, “Why We Hate HR”, is as discussed and relevant as ever. It trashes HR people as dull-witted pen-pushers: “The human-resources trade long ago proved itself, at best, a necessary evil — and at worst, a dark bureaucratic force that blindly enforces nonsensical rules, resists creativity, and impedes constructive change. HR is the corporate function with the greatest potential — the key driver, in theory, of business performance — and also the one that most consistently underdelivers.”

Oops.

According to the same article, “a 2005 survey by consultancy Hay Group, just 40% of employees commended their companies for retaining high-quality workers. 41% agreed that performance evaluations were fair. 58% rated their job training as favorable…. Most telling, only about half of workers below the manager level believed their companies took a genuine interest in their well-being.” Only half of employees think HR cares about them?

HR staff are either perceived as harbingers of bad news or, in the “best-case scenario” of doing a ‘useful’ activity, as a bureaucratic and legal bottleneck, usually slowing down operations and generating much negativity and pessimism among employees. It shouldn’t surprise us, then, that more and more companies observe more direct and stronger connections between employees and their managers as a result of eliminating the HR function. And one company – perhaps jaded by its previous HR department’s debilitating effect – hired a single HR staffer but agreed not to call her that: she goes without a title. Business is moving the other way, reducing HR departments by outsourcing their paper-pushing functions; PriceWaterhouseCoopers estimates this can shave 15 to 25 percent off HR costs. These humans are simply not resourceful enough.

Wow. Can it get any worse for the HR staff?

In one very public scandal, the BBC’s HR manager Lucy Adams “was accused of presiding over ‘corporate fraud and cronyism’ over huge pay-offs to former executives,” adding further insult to injury. Some of the most notorious HR strategies, such as PTOs, PIPs and performance reviews, may even destroy a company.

It thus seems that the existence of many HR departments defeats their very raison d’être as far as ‘internal’ (i.e. inside a company) activities and corporate goals are concerned.

What about an HR department’s ‘external’ role, that of scouting for the best and the brightest? A company’s strategy is its culture (created by its employees), and its culture is its strategy, which goes to show how essential it is to find the right people, who would not only have a skills-experience match but, more importantly, a cultural fit with the company. Zappos, Pixar, Cirque du Soleil and other successful companies attribute their success primarily to their people.

Yet, despite the known and accepted fact that many applicants forge or heavily polish their cover letters and CVs, HR departments – and the bigger or more famous the company, the more applicants it attracts – continue to commit two essential mistakes:

  1. overly rely on data on CV/cover letter;
  2. look for as close a “literal” (as opposed to “big picture”) match to the job vacancy as possible.

In most cases, the first mistake yields much redundant work (for and by HR departments), disappointment (when, once accepted, it turns out the candidate didn’t have good enough or pre-requisite skills or experience, or was not a cultural fit), or loss (the employee being fired or resigning shortly after joining the company).

The second mistake, equally or even more widespread, not only causes all the same problems but, more importantly, discards candidates whose profiles are wider than or somewhat different from the vacancy scope. In a modern age in which the present and future belong to generalists, HR departments’ tunnel vision – the same tunnel vision that discredits HR as a department unable to see the big picture (the company’s vision) or to assess and understand the business well enough to deserve decision-making power inside the company – turns off many a qualified generalist (i.e. multidisciplinary person) or candidate with a wide cross-section of skills and experiences, who would otherwise have been (significantly) useful and thrived within the company.

Thus the conjunction of the two above-mentioned mistakes and standard HR internal practices ends up costing HR not only its reputation but, in the longer run, dissuades companies from the idea of having a dedicated HR department. Or, as my generalist friend Arnold suggested, given how standard matching algorithms work, it ain’t no big stretch to imagine that if HR continues on its current path, the HR function will inevitably be automated via a software program specially designed for that purpose.
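The "literal match" mistake is easy to see in code. Here is a hypothetical sketch of naive keyword-overlap CV matching (all skill lists and names are invented): a candidate whose CV uses different words for transferable skills scores zero, however well they might actually fit.

```python
# Hypothetical "literal" CV matching: score candidates purely by exact
# keyword overlap with the vacancy text. All data below is invented.
vacancy = {"python", "statistics", "reporting", "sql"}

candidates = {
    "specialist": {"python", "sql", "reporting", "statistics"},
    "generalist": {"data analysis", "modelling", "communication",
                   "programming", "databases"},
}

for name, skills in candidates.items():
    # fraction of vacancy keywords found verbatim in the CV
    score = len(vacancy & skills) / len(vacancy)
    print(name, score)  # specialist 1.0, generalist 0.0
```

The generalist's "programming" and "databases" arguably cover "python" and "sql", but a literal matcher cannot see that, which is exactly the tunnel vision described above.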

Lastly, an erroneous hire usually ends up being a waste of monetary, time and emotional investment both for the company and for the (erroneously hired) employee, all while the HR department is being paid to ‘recruit’ talent.

Modern saga “The Fox and the Hedgehog”: generalists vs. specialists


About 2,700 years ago, Archilochus wrote that “The fox knows many things, but the hedgehog knows one big thing.” Taking that as a starting point, Isaiah Berlin’s 1953 essay “The Hedgehog and the Fox” contrasts hedgehogs that “relate everything to a single, central vision” with foxes who “pursue many ends connected … if at all, only in some de facto way.”

And so we have become a society of specialists, with the much-heralded “learn more about your function, acquire ‘expert’ status, and you’ll go further in your career” considered the corporate Holy Grail. But is it?

The modern corporation has grown out of the Industrial Revolution (IR). The IR started in 1712, when an Englishman named Thomas Newcomen invented a steam-driven pump to pump water out of a mine, so that English miners could mine more coal rather than haul buckets of water out of the mine. That was the dawn of the IR. It was all about productivity: more coal per man-hour; and then it became more steel per man-hour, more textiles per man-hour, etc.

The largest impact of the IR was the “socialization” of labor. Prior to the IR, people were largely self-sufficient, but the IR brought an increased division of labor, and this division of labor brought specialisation, which brought increased productivity. This specialisation, though, decreased self-sufficiency; people became increasingly inter-dependent on one another, and thus socialised more. Also, with the division of labor, the individual needed only to know how to do a specific task and nothing more. Specialization also caused the compartmentalization of responsibility and awareness. On a national level, it has allowed nations to become increasingly successful while their citizens become increasingly ignorant. Think of an average American: you can be totally wrong about almost everything in life, but as long as you know how to do one thing well you can be a success; in fact, in such a society, increased specialization becomes advantageous due to extreme competition. Environments with more competition breed more specialists.

But is the formula that ushered humanity into a 20th century of rapid technological industrialisation and economic development still valid or as impactful in the 21st century as it was for the last 300 years? In our modern VUCA world, who (specialists or generalists) has a better chance of not only surviving but thriving?

According to a number of independent research papers, the employees most likely to come out on top in companies and become successful in the long term are generalists – and not just because of their innate ability to adapt to new workplaces, job descriptions or cultural shifts. For example, according to Carter Phipps (author of Evolutionaries), generalists (will) thrive in a culture where it is becoming increasingly valuable to know “a little bit about a lot.” More than half of employees with specialist skills now consider their job to be mostly generalist, despite the fact that they were employed for their niche skills, according to another survey. Among the survey respondents, 60% thought their boss was a good generalist, and transferable skills – such as people skills and leadership – are often associated with more senior roles.

We’ve become a society that is data-rich (across all specialisations, industries and technologies) and meaning-poor. A rise in specialists in all areas – science, math, history, psychology – has left us with a huge amount of data/info/knowledge, but how valuable is it without context? Context in a data-rich world can only be provided by generalists, whose breadth of knowledge can serve as the link between various disciplines/contexts/frameworks.

David Christian, a good generalist, gave his 2011 TED talk “Big History”, covering the entire universe from the Big Bang to the present in 18 minutes, using principles of physics, chemistry, biology, information architecture and human psychology.

To conclude, it seems that specialisation is becoming less and less relevant due to 1) the increasing, interconnected and overlapping data and information that permeate all aspects of our lives, 2) the increasing VUCA-ness of the social, political and economic situations of individuals and nations, and 3) the need to envision and derive meaning from a bigger context, or to connect a few contexts/disciplines/frameworks. All three points seem to be better addressed by generalists.

Singapore, Rousseau and the social contract


In 1965, less than two years after joining Malaysia, Singapore was forced to leave the bigger country and declare its own independence.

At the time, its economy was in tatters. Lawlessness reigned. Singapore was marred by high unemployment, a lack of sanitation, a short supply of potable water, and ethnic conflict. About three million people, half of whom were unemployed, occupied an island sandwiched between two large and unfriendly states: Malaysia and Indonesia. Ethnic Chinese and Malays were divided by race and language and often fought street battles.

The economy and the political situation were both dire, and each made the other worse.

The conventional economic wisdom of the 1960s held that every nation, especially a small one, needed a hinterland to succeed. Singapore had none. The status-quo wisdom of development economists was that multinational corporations were great exploiters of cheap land, labor and raw materials.

Forced by all means to find work for their people, the leaders of Singapore promoted "globalization" before it became fashionable to do so. Singapore embraced globalization a generation earlier than other third-world countries because it had no choice but to go against dependency theory, the predominant economic thinking of the time.

But globalization was only one manifestation of a bigger picture. At the heart of the Singapore model is the social contract articulated between the ruling People’s Action Party (PAP) government and the people of Singapore. In essence, it said that if the people were willing to accept more government control, give up some individual rights, and work hard, the government would create an environment that delivered prosperity and a better quality of life.

The idea of the social contract is not new. Rousseau was one of its most prominent theorists. In his view, the larger the bureaucracy, the more power is required for government discipline. Normally, this relationship requires the state to be an aristocracy or a monarchy (as far as he is concerned, both could be elected). Rousseau argues that the political authority (with which the people are in a social contract) has two parts: the sovereign (generic, legislative, representing the general will, which he defines as the rule of law) and the government (particular, administrative, day-to-day).

The autocratic dominance of the ruling PAP also provided confidence that national policies based on the social contract would remain stable in the short run, while continued efforts would be made to plan for Singapore’s long-term challenges.  And they did.

In the period 1960–1999, Singapore achieved average annual economic growth of 8%. It was one of the fastest-growing countries from 1970 to 2000 and has been classified as a ‘Growth Miracle’ and an ‘Asian Tiger’ economy. As a result, the World Bank officially classified Singapore as a “developed economy.”

The Singapore story is a thorn in the side of development specialists from the school of thought that Samuel Huntington has labeled ‘convergence’ theory, whose adherents believe that all desirable characteristics of national development (democracy, free markets, higher standards of living, etc.) reinforce one another. While democracy in Indonesia after Suharto and in the Philippines after Marcos brought even more economic uncertainty and poverty, it was the reign of an autocratic regime in Singapore that delivered economic development.

1. As the democratization of third-world countries in Eastern Europe, Latin America and East Asia has shown over the last decade, being elected to office by the general populace provides no guarantee that national leaders will be free of corruption, effective, or dedicated to the national interest. Even for President Salinas of Mexico, a moderately respected elected president by Latin American standards, the national interest came second to his personal interest: he let the instability of the Mexican economy keep brewing while he campaigned to become head of the World Trade Organization (WTO). His economic advisors unanimously urged that the Mexican currency and financial markets could be saved from imminent collapse only by an immediate devaluation, before his retirement; Salinas did not act, for fear of blotting his reputation. Like the Mexican example, the financial collapse of democratic Thailand and Russia in 1997 showed that elected leaders who come to power carrying substantial expectations, after intense campaigning in which they promised significant national (social and economic) development, are never immune from mortgaging the future of their people to finance grandiose if imprudent national projects that, among other things, serve to enrich the cronies who helped deliver the election outcome in the first place.

2. Apart from governing effectively, the Singapore government exercises considerable discipline in managing its economic affairs. While the PAP ran on a socialist platform to get elected, it was careful about which industries the government nationalized. As a rule, the government did not intervene in markets where it felt the private sector was doing a good job of meeting Singapore’s economic interests. This policy was outlined in a speech called “Survival” that the former foreign minister, S. Rajaratnam, delivered in the early 1970s. In it, Rajaratnam said that the government supported state-run corporations like Singapore Airlines and Neptune Orient Lines because the private sector had neither the ambition nor the financial backing to start such essential organizations, which would make trade with the developed countries possible.

3. The importance the government has attached to Singapore’s human resource development, and the investments it has made in its own people. While the PAP ruthlessly crushed all independent labor unions and consolidated what remained into an umbrella group called the National Trades Union Congress (NTUC), which it directly controlled, it also set up technical schools and paid foreign corporations to train unskilled workers for higher-paying jobs in electronics, ship repair, and petrochemicals. For those who still could not get industrial jobs, the government enlisted the NTUC to create labor-intensive, “un-tradable” services, mostly in tourism and transportation.

Written by Hayk

March 10, 2013 at 9:53 am