Failures and breakthroughs – exposed, reflected, considered

Archive for the ‘technological failures’ Category

Top 13 challenges AI is facing in 2017


AI and ML feed on data, and companies that center their business around the technology are growing a penchant for collecting user data, with or without the latter’s consent, in order to make their services more targeted and efficient. Implementations of AI/ML already make it possible to impersonate people by imitating their handwriting, voice and conversation style, an unprecedented power that can come in handy in a number of dark scenarios. However, despite large amounts of previously collected data, early AI pilots have had trouble producing the dramatic results that technology enthusiasts predicted. For example, early efforts of companies developing chatbots for Facebook’s Messenger platform saw 70% failure rates in handling user requests.

One of the main challenges of AI goes beyond data: algorithmic bias. For example, a name-ranking algorithm ended up favoring white-sounding names, and advertising algorithms preferred to show high-paying job ads to male visitors.

Another challenge that caused much controversy in the past year was the “filter bubble” phenomenon seen on Facebook and other social media, which tailored content to the biases and preferences of users, effectively shutting them out from other viewpoints and realities.

Additionally, as we give more control and decision-making power to AI algorithms, moral and philosophical considerations become as important as technological ones: for example, when a self-driving car has to choose between the life of a passenger and that of a pedestrian.

To sum up, the following are the challenges that AI still faces, despite creating and processing increasing amounts of data and commanding unprecedented amounts of other resources (people working on algorithms, CPUs, storage, better algorithms, etc.):

  1. Unsupervised Learning: Deep neural networks have afforded huge leaps in performance across a variety of image, sound and text problems. Most noticeably, in 2015 the application of RNNs to text problems (NLP, language translation, etc.) exploded. A major bottleneck is the acquisition of labeled training data. It is known that humans learn about objects and navigation with relatively little labeled “training” data. How is this performed? How can it be efficiently implemented in machines?
  2. Select Induction Vs. Deduction Vs. Abduction Based Approach: Induction is almost always the default choice when it comes to building an AI model for data analysis. However, it – as well as deduction, abduction and transduction – has limitations that need serious consideration.
  3. Model Building: TensorFlow has opened the door for conversations about building scalable ML platforms. There are plenty of companies working on data-science-in-the-cloud (H2O, Dato, MetaMind, …) but the question remains, what is the best way to build ML pipelines? This includes ETL, data storage and optimisation algorithms.
  4. Smart Search: How can deep learning create better vector spaces and algorithms than Tf-Idf? What are some better alternative candidates?
  5. Optimise Reinforcement Learning: As this approach avoids the problem of getting labelled data, the system needs to gather data, learn from it and improve. While AlphaGo used RL to win against the Go champion, RL isn’t without its own issues, at both the conceptual and the technical level.

  6. Build Domain Expertise: How to build and sustain domain knowledge in industries and for problems that involve reasoning based on a complex body of knowledge (legal, financial, etc.), and then formulate a process whereby machines can simulate an expert in the field?
  7. Grow Domain Knowledge: How can AI tackle problems that involve extending a complex body of knowledge by suggesting new insights to the domain itself – for example, new drugs to cure diseases?
  8. Complex Task Analyser and Planner: How can AI tackle complex tasks requiring data analysis, planning and execution? Many logistics and scheduling tasks can be done by current (non-AI) algorithms. A good example is the use of AI techniques in IoT for sparse datasets: AI helps here because there are large and complex datasets in which human beings cannot detect patterns but machines can do so easily.
  9. Better Communication: While the proliferation of smart chatbots and AI-powered communication tools has been a trend for several years, these tools are still far from smart, and may at times fail to recognise even simple human language.
  10. Better Perception and Understanding: While Alibaba and Face++ create facial recognition software, visual perception and labelling are still generally problematic. There are a few good examples, like the Russian face recognition app that is good enough to be considered a potential tool for oppressive regimes seeking to identify and crack down on dissidents. Another algorithm proved effective at peeking behind masked images and blurred pictures.
  11. Anticipate Second-Order (and Higher) Consequences: AI and deep learning have improved computer vision, for example, to the point that autonomous vehicles (cars and trucks) are viable (Otto, Waymo). But what will their impact be on the economy and society? What’s scary is that with the advance of AI and related technologies, we might understand less and less of AI’s data analysis and decision-making process. Starting in 2012, Google used LSTMs to power the speech recognition system in Android, and in December 2016, Microsoft reported that its system reached a word error rate of 5.9% — a figure roughly equal to that of human abilities for the first time in history. The goal-post continues to move rapidly: one effort, for example, is building an avatar that can capture your personality. Preempting what’s to come, starting in the summer of 2018 the EU is considering requiring that companies be able to give users an explanation for decisions that their automated systems reach.
  12. Evolution of Expert Systems: Expert systems have been around for a long time. Much of the vision of expert systems could be implemented in AI/deep learning algorithms in the near future. The architecture of IBM Watson is an indicative example.
  13. Better Sentiment Analysis: Still catching up to, and far from surpassing, lexicon-based models for sentiment analysis, this is still pretty much a nascent and uncharted space for most AI applications. There are some small steps in this regard though, including OpenAI’s use of the mLSTM methodology to conduct sentiment analysis of text. The main issue is that there are many conceptual and contextual rules (rooted and steeped in the particulars of culture, society, upbringing, etc. of individuals) that govern sentiment, and there are even more clues (possibly unlimited) that can convey these concepts.
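
Item 4 above mentions Tf-Idf as the baseline to beat in search. As a reference point, here is a minimal Tf-Idf sketch in plain Python; the toy corpus and the pre-tokenised word lists are assumptions for illustration:

```python
import math

def tf_idf(corpus):
    """Compute Tf-Idf weights for a list of tokenised documents."""
    n = len(corpus)
    # document frequency: in how many documents each term appears
    df = {}
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in corpus:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)      # term frequency
            idf = math.log(n / df[term])         # inverse document frequency
            w[term] = tf * idf
        weights.append(w)
    return weights

docs = [["deep", "learning", "search"],
        ["search", "engine", "ranking"],
        ["deep", "neural", "network"]]
w = tf_idf(docs)
# "search" appears in two of three docs, so it is down-weighted
# relative to terms unique to a single document, like "learning"
```

The weakness this exposes is exactly the one the item raises: Tf-Idf scores surface forms of words, with no notion of meaning, which is what learned vector spaces try to improve on.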



Reinforcement Learning vs. Evolutionary Strategy: combine, aggregate, multiply


A birds-eye view of main ML algorithms

In statistics, we have descriptive and inferential statistics. ML deals with the same problems, and claims any problem where the solution isn’t programmed directly but is instead learned by the program. ML generally works by numerically minimising something: a cost function, or error.
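
That “numerically minimising a cost function” idea can be made concrete with a few lines of gradient descent; the quadratic cost below is a made-up toy example:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimise a cost."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# toy cost: (x - 3)^2, whose gradient is 2 * (x - 3)
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
# the iterate converges towards x = 3, the minimiser of the cost
```

Most of the algorithms listed below, from linear regression to neural networks, are variations on this loop with fancier cost functions and parameter spaces.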

Supervised learning – You have labeled data: a sample of ground truth with features and labels. You estimate a model that predicts the labels using the features. Alternative terminology: predictor variables and target variables. You predict the values of the target using the predictors.

  • Regression. The target variable is numeric. Example: you want to predict the crop yield based on remote sensing data. Algorithms: linear regression, polynomial regression, generalized linear models.
  • Classification. The target variable is categorical. Example: you want to detect the crop type that was planted using remote sensing data. Or Silicon Valley’s “Not Hot Dog” application.1 Algorithms: Naïve Bayes, logistic regression, discriminant analysis, decision trees, random forests, support vector machines, neural networks of many variations: feed-forward NNs, convolutional NNs, recurrent NNs.
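
A minimal sketch of the regression case above, fitting y = a·x + b by ordinary least squares; the “vegetation index vs. crop yield” numbers are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# features: vegetation index; labels: crop yield (made-up numbers)
xs = [0.1, 0.2, 0.3, 0.4]
ys = [1.0, 2.0, 3.0, 4.0]
a, b = fit_line(xs, ys)
prediction = a * 0.5 + b   # predict the yield for a new observation
```

The supervised pattern is the same for every algorithm in the list: estimate parameters from (feature, label) pairs, then apply the model to unseen features.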

Unsupervised learning – You have a sample with unlabeled information. No single variable is the specific target of prediction. You want to learn interesting features of the data:

  • Clustering. Which of these things are similar? Example: group consumers into relevant psychographics. Algorithms – k-means, hierarchical clustering.
  • Anomaly detection. Which of these things are different? Example: credit card fraud detection. Algorithms: k-nearest-neighbor.
  • Dimensionality reduction. How can you summarise the data in a high-dimensional data set using a lower-dimensional dataset which captures as much of the useful information as possible (possibly for further modelling with supervised or unsupervised algorithms)? Example: image compression. Algorithms: principal component analysis (PCA), neural network auto-encoders.
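
The clustering bullet can be illustrated with a tiny one-dimensional k-means (Lloyd’s algorithm); the data points and initial centres are assumptions:

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm on 1-D points with a fixed number of centres."""
    for _ in range(iters):
        # assignment step: attach each point to its nearest centre
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # update step: move each centre to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers = kmeans_1d(data, centers=[0.0, 5.0])
# the two centres settle on the two obvious groups in the data
```

Note there is no label anywhere: the algorithm discovers the grouping from the data alone, which is what makes it unsupervised.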

Reinforcement Learning  (Policy Gradients, DQN, A3C,..) – You are presented with a game/environment that responds sequentially or continuously to your inputs, and you learn to maximise an objective through trial and error.
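
A hedged sketch of that trial-and-error loop: tabular Q-learning on a made-up five-state corridor whose rightmost state pays the only reward (the environment, epsilon, learning rate and discount are all illustrative choices, and real deep RL replaces the table with a network):

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)                  # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                              # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        a = random.choice(ACTIONS) if random.random() < 0.2 \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best next action
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = s2

# after training, moving right toward the reward dominates near the goal
```

Nothing here needed a labelled dataset: the only feedback is the sparse reward signal, which is both RL’s appeal and, as discussed below, its weakness.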

Evolutionary Strategy – This approach consists of maintaining a distribution over network weight values, and having a large number of agents act in parallel using parameters sampled from this distribution. Each agent is then scored by its performance on the task. With these scores, the parameter distribution can be moved toward that of the more successful agents, and away from that of the unsuccessful ones. By repeating this approach millions of times, with hundreds of agents, the weight distribution moves to a space that provides the agents with a good policy for solving the task at hand.
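
The loop just described can be sketched in a few lines. This toy version is closer to the cross-entropy method than to gradient-estimating ES variants, and the fitness function, population size and annealing schedule are all assumptions:

```python
import random

random.seed(1)

def fitness(w):
    return -(w - 3.0) ** 2        # toy objective: best at w = 3

mu, sigma, n_agents = 0.0, 1.0, 50
for _ in range(200):
    # sample a population of parameters from the current distribution
    pop = [random.gauss(mu, sigma) for _ in range(n_agents)]
    scores = [fitness(w) for w in pop]
    # move the distribution towards the better-scoring agents
    ranked = sorted(zip(scores, pop), reverse=True)
    elite = [w for _, w in ranked[:10]]
    mu = sum(elite) / len(elite)
    sigma = max(0.9 * sigma, 0.05)   # slowly narrow the search
# mu has drifted close to the optimum at 3.0 without any gradients
```

The key property to notice is what is communicated each round: only sampled parameters and their scores, never gradients, which is exactly the scaling advantage discussed below.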

All the complex tasks in ML, from self-driving cars to machine translation, are solved by combining these building blocks into complex stacks.

Pros/cons of RL and ES

One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behaviour.

RL is known to be unstable or even to diverge when a nonlinear function approximator such as a NN is used to represent the action-value (also known as Q) function. This instability has several causes: the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and therefore change the data distribution, and the correlations between the action-values and the target values.

RL’s other challenge is generalisation. In typical deep RL methods, this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable.

Whereas RL methods such as A3C need to communicate gradients back and forth between workers and a parameter server, ES only requires fitness scores and high-level parameter distribution information to be communicated. It is this simplicity that allows the technique to scale up in ways current RL methods cannot. However, in situations with richer feedback signals, things don’t go so well for ES.

Contextualising and combining RL and ES

Appealing to nature for inspiration in AI can sometimes be seen as a problematic approach. Nature, after all, is working under constraints that computer scientists simply don’t have. If we look at intelligent behaviour in mammals, we find that it comes from a complex interplay of two ultimately intertwined processes: inter-life learning and intra-life learning. Roughly speaking, these two approaches in nature can be compared to the two in neural network optimisation. ES, for which no gradient information is used to update the organism, is related to inter-life learning. Likewise, the gradient-based methods (RL), for which specific experiences change the agent in specific ways, can be compared to intra-life learning.

The techniques employed in RL are in many ways inspired directly by the psychological literature on operant conditioning that came out of animal psychology. (In fact, Richard Sutton, one of the two founders of RL, actually received his Bachelor’s degree in Psychology.) In operant conditioning, animals learn to associate rewarding or punishing outcomes with specific behaviour patterns. Animal trainers and researchers can manipulate this reward association in order to get animals to demonstrate their intelligence or behave in certain ways.

The central role of prediction in intra-life learning changes the dynamics quite a bit. What was before a somewhat sparse signal (occasional reward) becomes an extremely dense signal. At each moment, mammalian brains are predicting the results of the complex flux of sensory stimuli and actions in which the animal is immersed. The outcome of the animal’s behaviour then provides a dense signal to guide the change in predictions and behaviour going forward. All of these signals are put to use in the brain in order to improve predictions (and consequently the quality of actions) going forward. If we apply this way of thinking to learning in artificial agents, we find that RL isn’t somehow fundamentally flawed; rather, the signal being used isn’t nearly as rich as it could (or should) be. In cases where the signal can’t be made richer (perhaps because it is inherently sparse, or to do with low-level reactivity), it is likely that learning through a highly parallelisable method such as ES is instead better.

Combining many

It is clear that for many reactive policies, or situations with extremely sparse rewards, ES is a strong candidate, especially if you have access to the computational resources that allow for massively parallel training. On the other hand, gradient-based methods using RL or supervision are going to be useful when a rich feedback signal is available, and we need to learn quickly with less data.

An extreme example combines more than just ES and RL, and Microsoft’s Maluuba is an illustrative case, which used many algorithms to beat the game Ms. Pac-Man. When the agent (Ms. Pac-Man) starts to learn, it moves randomly; it knows nothing about the game board. As it discovers new rewards (the little pellets and fruit Ms. Pac-Man eats) it begins placing little algorithms in those spots, which continuously learn how best to avoid ghosts and get more points based on Ms. Pac-Man’s interactions, according to the Maluuba research paper.

As the 163 potential algorithms are mapped, they continually send which movement they think would generate the highest reward to the agent, which averages the inputs and moves Ms. Pac-Man. Each time the agent dies, all the algorithms process what generated rewards. These helper algorithms were carefully crafted by humans to understand how to learn, however.
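
The averaging step described above can be sketched as follows; the three helper estimates below are made-up stand-ins for Maluuba’s 163 learned sub-algorithms:

```python
def aggregate_action(helper_estimates):
    """Average per-action value estimates from many helpers, then act
    greedily on the aggregate (a simplification of the idea that each
    helper 'sends which movement it thinks would generate the highest
    reward' and the agent averages the inputs)."""
    actions = helper_estimates[0].keys()
    avg = {a: sum(h[a] for h in helper_estimates) / len(helper_estimates)
           for a in actions}
    return max(avg, key=avg.get), avg

# three hypothetical helpers voting over four moves
helpers = [
    {"up": 0.1, "down": 0.0, "left": 0.7, "right": 0.2},
    {"up": 0.2, "down": 0.1, "left": 0.6, "right": 0.1},
    {"up": 0.9, "down": 0.0, "left": 0.1, "right": 0.0},
]
move, avg = aggregate_action(helpers)
# two of three helpers favour "left", so the aggregate does too
```

The design choice worth noting is that no single helper has to solve the whole game; the averaging lets many narrow, locally trained estimators produce one global decision.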

Instead of having one algorithm learn one complex problem, the AI distributes learning over many smaller algorithms, each tackling simpler problems, Maluuba says in a video. This research could be applied to other highly complex problems, like financial trading, according to the company.

But it’s worth noting that since more than 100 algorithms are being used to tell Ms. Pac-Man where to move and win the game, this technique is likely to be extremely computationally intensive.

Bayes craze, neural networks and uncertainty


Story, context and hype

Named after its inventor, the 18th-century Presbyterian minister Thomas Bayes, Bayes’ theorem is a method for calculating the validity of beliefs (hypotheses, claims, propositions) based on the best available evidence (observations, data, information). Here’s the most dumbed-down description: Initial/prior belief + new evidence/information = new/improved belief.

P(B|E) = P(B) × P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
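
The formula is easy to sanity-check in code. The screening-test numbers below (1% prevalence, 90% sensitivity, 9% false-positive rate) are a standard textbook illustration, not figures from this article:

```python
def posterior(prior_b, p_e_given_b, p_e):
    """Bayes' theorem: P(B|E) = P(B) * P(E|B) / P(E)."""
    return prior_b * p_e_given_b / p_e

# hypothetical screening test for a rare condition
p_b = 0.01                 # prior belief: 1% prevalence
p_e_given_b = 0.9          # sensitivity: test fires for 90% of true cases
false_positive_rate = 0.09 # test also fires for 9% of the healthy

# total probability of the evidence (a positive test)
p_e = p_b * p_e_given_b + (1 - p_b) * false_positive_rate
belief = posterior(p_b, p_e_given_b, p_e)
# the "improved belief" is only about 9%, despite the positive test,
# because the prior was so small
```

This is the whole machinery: prior belief, new evidence, improved belief, and it also previews the pitfall discussed below, since everything hinges on that prior.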

Recently, Bayes’ theorem has become ubiquitous in modern life and is applied in everything from physics to cancer research, psychology to ML spam algorithms. Physicists have proposed Bayesian interpretations of quantum mechanics and Bayesian defences of string and multiverse theories. Philosophers assert that science as a whole can be viewed as a Bayesian process, and that the Bayesian approach can distinguish science from pseudoscience more precisely than falsification, the method popularised by Karl Popper. Some even claim Bayesian machines might be so intelligent that they make humans “obsolete.”

Bayes going into AI/ML

Neural networks are all the rage in AI/ML. They learn tasks by analysing vast amounts of data and power everything from face recognition at Facebook to translation at Microsoft to search at Google. They’re beginning to help chatbots learn the art of conversation. And they’re part of the movement toward driverless cars and other autonomous machines. But because they can’t make sense of the world without help from such large amounts of carefully labelled data, they aren’t suited to everything. Induction is the prevalent approach in these learning methods, and they have difficulty dealing with uncertainty, with the probabilities of future occurrences of different types of data/events, and with “confident error” problems.

Additionally, AI researchers have limited insight into why neural networks make particular decisions. They are, in many ways, black boxes. This opacity could cause serious problems: What if a self-driving car runs someone down?

Regular/standard neural networks are bad at calculating uncertainty. However, there is a recent trend of bringing Bayes (and other alternative methodologies) into this game too. AI researchers, including those working on Google’s self-driving cars, have started employing Bayesian software to help machines recognise patterns and make decisions.

Gamalon, an AI startup that went live earlier in 2017, touts a new type of AI that requires only small amounts of training data – its secret sauce is Bayesian Program Synthesis.

Rebellion Research, founded by the grandson of baseball great Hank Greenberg, relies upon a form of ML called Bayesian networks, using a handful of machines to predict market trends and pinpoint particular trades.

There are many more examples.

The dark side of Bayesian inference

The most notable pitfall of the Bayesian approach is the calculation of prior probability. In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into calculations. Some prior probabilities are unknown or don’t even exist, such as those for multiverses, inflation or God. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

In 1997, Microsoft launched its animated MS Office assistant Clippit, which was conceived to work on a Bayesian inference system but failed miserably.

In law courts, Bayesian principles may lead to serious miscarriages of justice (see the prosecutor’s fallacy). In a famous example from the UK, Sally Clark was wrongly convicted in 1999 of murdering her two children. Prosecutors had argued that the probability of two babies dying of natural causes (the prior probability that she is innocent of both charges) was so low – one in 73 million – that she must have murdered them. But they failed to take into account that the probability of a mother killing both of her children (the prior probability that she is guilty of both charges) was also incredibly low.

So the relative prior probabilities that she was totally innocent or a double murderer were more similar than initially argued. Clark was later cleared on appeal with the appeal court judges criticising the use of the statistic in the original trial. Here is another such case.
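
The correction at issue is a comparison of two small priors. Using the 1-in-73-million figure cited at trial and a deliberately made-up illustrative figure for double infanticide (no such number appears in the case as described here), the posterior odds look nothing like certain guilt:

```python
# prior probability of two cot deaths (the figure argued at trial)
p_two_natural_deaths = 1 / 73_000_000

# prior probability of a mother murdering both children:
# an invented illustrative value, also extremely small
p_double_murder = 1 / 100_000_000

# the relevant quantity is the ratio of the two rare hypotheses,
# not the smallness of either prior on its own
odds_guilty_vs_innocent = p_double_murder / p_two_natural_deaths
# the odds come out near even, not "73 million to one against innocence"
```

The prosecutor’s fallacy, in these terms, is quoting one tiny prior in isolation instead of comparing it against the competing hypothesis, which is just as improbable.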

Alternative, complementary approaches

In actual practice, the method of evaluation most scientists/experts use most of the time is a variant of a technique proposed by Ronald Fisher in the early 1900s. In this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed 95% or 99% of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
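
Fisher’s criterion (“would random data pass this bar no more than 5% of the time?”) can be simulated directly; the coin-flip setup and the number of simulated trials are assumptions for illustration:

```python
import random

random.seed(2)

def p_value(observed_heads, n, trials=20_000):
    """Fraction of random fair-coin datasets at least as extreme as
    the observation: a simulated one-sided p-value."""
    extreme = sum(
        sum(random.random() < 0.5 for _ in range(n)) >= observed_heads
        for _ in range(trials)
    )
    return extreme / trials

# 16 heads out of 20 flips: rare under the fair-coin null hypothesis
p = p_value(16, 20)
# p lands well below 0.05, so Fisher's criterion rejects "fair coin"
```

Note what is absent: no prior over coins was needed, which is exactly the advantage claimed above when no sufficient advance information exists.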

As many AI/ML algorithms automate their optimisation and learning processes, a careful treatment of the underlying Gaussian process, including the type of kernel and the handling of its hyper-parameters, can play a crucial role in obtaining a good optimiser that can achieve expert-level performance.

Dropout, a technique that addresses the overfitting problem and has been in use in deep learning for several years, also enables uncertainty estimates by approximating those of a Gaussian process. The Gaussian process is a powerful tool in statistics that allows modelling distributions over functions and has been applied in both the supervised and unsupervised domains, for both regression and classification tasks. It offers nice properties such as uncertainty estimates over the function values, robustness to over-fitting, and principled ways for hyper-parameter tuning.

Google’s Project Loon uses Gaussian process (together with reinforcement learning) for its navigation.

101 and failures of Machine Learning


Nowadays, ‘artificial intelligence’ (AI) and ‘machine learning’ (ML) are clichés that people use to signal awareness of technological trends. Companies tout AI/ML as panaceas for their ills and as a competitive advantage over their peers. From flower recognition, to the algorithm that won against the Go champion, to big financial institutions, including the ETFs of the biggest hedge fund in the world, players have already moved or are moving to the AI/ML era.

However, as with any new technological breakthroughs, discoveries and inventions, the path is laden with misconceptions, failures, political agendas, etc. Let’s start with an overview of the basic methodologies of ML, the foundation of AI.

101 and limitations of AI/ML

The fundamental goal of ML is to generalise beyond the specific examples/occurrences of data. ML research focuses on experimental evaluation on actual data for realistic problems. A system’s (algorithm’s, program’s) performance is then evaluated by training it on a set of training examples and measuring its accuracy at predicting novel test (or real-life) examples.

Most frequently used methods in ML are induction and deduction. Deduction goes from the general to the particular, and induction goes from the particular to the general. Deduction is to induction what probability is to statistics.

Let’s start with induction. The domino effect is perhaps the most famous instance of induction. Inductive reasoning consists in constructing axioms (hypotheses, theories) from the observation of supposed consequences of these axioms. Induction alone is not that useful: the induction of a model (a piece of general knowledge) is interesting only if you can use it, i.e. if you can apply it to new situations, by going somehow from the general to the particular. This is what scientists do: observing natural phenomena, they postulate the laws of Nature. However, there is a problem with induction. It’s impossible to prove that an inductive statement is correct. At most, one can empirically observe that the deductions that can be made from this statement are not in contradiction with experiments. But one can never be sure that no future observation will contradict the statement. Black Swan theory is the most famous illustration of this problem.

Deductive reasoning consists in combining logical statements (axioms, hypothesis, theorem) according to certain agreed upon rules in order to obtain new statements. This is how mathematicians prove theorems from axioms. Proving a theorem is nothing but combining a small set of axioms with certain rules. Of course, this does not mean proving a theorem is a simple task, but it could theoretically be automated.

A problem with deduction is exemplified by Gödel’s theorem, which states that for a rich enough set of axioms, one can produce statements that can be neither proved nor disproved.

Two other kinds of reasoning exist, abduction and analogy, and neither is frequently used in AI/ML, which may explain many of the current AI/ML failures/problems.

Like deduction, abduction relies on knowledge expressed through general rules. Like deduction, it goes from the general to the particular, but it does in an unusual manner since it infers causes from consequences. So, from “A implies B” and “B”, A can be inferred. For example, most of a doctor’s work is inferring diseases from symptoms, which is what abduction is about. “I know the general rule which states that flu implies fever. I’m observing fever, so there must be flu.” However, abduction is not able to build new general rules: induction must have been involved at some point to state that “flu implies fever”.
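
The flu-fever pattern can be rendered as a tiny rule base; the rules are the ones from the text plus one invented extra, and the code structure itself is only an illustrative assumption:

```python
# general rules, stated by induction: (cause, effect) pairs
rules = [
    ("flu", "fever"),
    ("flu", "cough"),
    ("food_poisoning", "nausea"),   # an invented extra rule
]

def deduce(cause):
    """Deduction: from a cause, derive its effects (general -> particular)."""
    return {effect for c, effect in rules if c == cause}

def abduce(effect):
    """Abduction: from an observed effect, infer candidate causes."""
    return {cause for cause, e in rules if e == effect}

# observing fever, abduction proposes flu as an explanation
causes = abduce("fever")
```

Note that abduction only ranks explanations already present in the rule base; as the text says, it cannot add the rule “flu implies fever” itself, since induction must have supplied it first.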

Lastly, analogy goes from the particular to the particular. The most basic form of analogy is based on the assumption that similar situations have similar properties. More complex analogy-based learning schemes, involving several situations and recombinations, can also be considered. Many lawyers use analogical reasoning to analyse new problems based on previous cases. Analogy completely bypasses model construction: instead of going from the particular to the general, and then from the general to the particular, it goes directly from the particular to the particular.

Let’s next check some conspicuous AI/ML failures (in 2016) and the corresponding AI/ML methodology that, in my view, was responsible for each failure:

Microsoft’s chatbot Tay utters racist, sexist, homophobic slurs (mimicking/analogising failure)

In an attempt to form relationships with younger customers, Microsoft launched an AI-powered chatbot called “Tay” on Twitter in 2016. Tay, modelled around a teenage girl, morphed into a “Hitler-loving, feminist-bashing troll“ — within just a day of her debut online. Microsoft yanked Tay off the social media platform and announced it planned to make “adjustments” to its algorithm.

AI-judged beauty contest was racist (deduction failure)

In “The First International Beauty Contest Judged by Artificial Intelligence,” a robot panel judged faces, based on “algorithms that can accurately evaluate the criteria linked to perception of human beauty and health.” But by failing to supply the AI/ML with a diverse training set, the contest winners were all white.

Chinese facial recognition study predicted convicts but shows bias (induction/abduction failure)

Researchers in China published a study entitled “Automated Inference on Criminality using Face Images.” They “fed the faces of 1,856 people (half of which were convicted violent criminals) into a computer and set about analysing them.” The researchers concluded that there were some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Many in the field questioned the results and the report’s ethical underpinnings.

Concluding remarks

The above examples must not discourage companies from incorporating AI/ML into their processes and products. Most AI/ML failures seem to stem from a band-aid, superficial way of embracing AI/ML. A better and more sustainable approach to incorporating AI/ML would be to initiate a mix of projects generating both quick wins and long-term transformational products/services/processes. For quick wins, a company might focus on changing internal employee touchpoints, using recent advances in speech, vision, and language understanding, etc.

For long-term projects, a company might go beyond local/point optimisation, to rethinking business lines, products/services, end-to-end processes, which is the area in which companies are likely to see the greatest impact. Take Google. Google’s initial focus was on incorporating ML into a few of their products (spam detection in Gmail, Google Translate, etc), but now the company is using machine learning to replace entire sets of systems. Further, to increase organisational learning, the company is dispersing ML experts across product groups and training thousands of software engineers, across all Google products, in basic machine learning.


The rise and demise of GM’s Saturn


Not very long ago, on a cold, wintry day of January 1985 the top man at GM, Roger B. Smith, unveiled ‘Saturn’, the first new brand to come out of GM in almost seven decades. A stand-alone subsidiary of GM, Saturn had a promising birth and was touted as a ‘different’ type of a car. Having its own assembly plant, unique models and separate retailer network, Saturn operated independently from its parent company. It was a cut above the rest in using innovative technology and involving its employees in the decision making process. Conceived as a fighter brand to take on the Japanese brands, the small car of superior quality was the product of strong principles with a mission of being America’s panacea to Japan’s challenge. It reaffirmed the strength of American technology, ingenuity and productivity with the combination of advanced technology and latest approaches to management.

Though a revolutionary idea, Saturn wasn’t able to live up to the hype or the hopes of Roger Smith. The case of Saturn is definitely one for the books. Its marketing campaign fired up the public’s imagination and interest perfectly while the product was a miserable failure. Everything the company did was another leaf out of the handbook of perfect PR. When the first lot of cars had bad engine antifreeze, the company replaced the entire cars instead of just the coolant, much to the customers’ delight.

Besides clever marketing, Saturn’s biggest assets were its passionate employees and customer-centric approach, which rewarded it with a quick victory. The victory was however short-lived, as GM was reluctant to expand Saturn’s offerings for fear of cannibalizing the sales of its other divisions. For the existing models, Saturn’s engine had inferior motor mounts, the plastic dashboard panels gave it a cheap look, and even the plastic-polymer doors, the so-called unique feature, failed to fit properly. Overall, the car neither had an identity nor a USP. To make things worse, Roger Smith was on a spending spree, from throwing recall parties when vehicle problems were solved to hosting “homecoming” celebrations at plants. This saddled GM with high costs, leading to increased doubts about Saturn’s survival among the leaders of GM.

Disaster struck further when Saturn’s sub-compact prices failed to cover the huge costs gobbled up by a dedicated plant with massive operating costs. The fact that the plant churned out cars that barely shared any common parts with other GM brands did not seem to help at all. To top it all, at a time when buyers were snapping up minivans and SUVs, Saturn’s offerings were limited to just 3 small models for over a decade, thereby losing out on locking customers in. Just when GM was pondering the decision of scrapping the car, the UAW visited one of Saturn’s production facilities with its international contract, only to have it rejected by the workers. As obvious as it seemed, the unique labor contract of the company was dissolved, and GM had no choice but to part with the brand by dividing the production among other GM plants.

Automotive history has witnessed myriad failure stories of brands that were supposed to be world-class products but ended up biting the dust. One such underachiever was Vector, which sprang from the ambition to build an American supercar but was doomed by cash-flow issues, mismanagement and a failure to keep its insane promises. Sterling, Rover’s disguise for the American market, was another lost car of the ’80s that most people haven’t even heard of. Its promise of delivering “Japanese reliability and refinement with traditional British luxury and class” couldn’t save it from a continuous sales slide and competition from new Japanese rivals. A few other epic automotive experiments gone wrong include Chrysler’s TC by Maserati, the Subaru SVX, Jaguar X-Type, Lincoln Blackwood, GMC Envoy XUV, Chevrolet SSR, Chrysler Crossfire and Dodge Durango Hybrid/Chrysler Aspen Hybrid. Some were design disasters; the others simply couldn’t perform.

The automobile industry is governed by various factors, including the technological advances of the time, economic conditions and shifting consumer needs. The newest players on the block are electric cars, which are set to revolutionize the entire industry. LeEco, a Chinese electronics company, is taking serious aim at Tesla, investing $1.08 billion in developing its debut electric car. Tesla is the name that paved the way for the electric-vehicle era. Whether LeSEE, LeEco’s concept sedan, can surpass Tesla’s performance and give it a run for its money, only time will tell. If successful, these electric cars could be the game changers of this century and usher in an electric future. If not, LeSEE will fade away and take its place as a bittersweet memory on the industry’s list of flops.

Written by Hayk

April 8, 2017 at 4:18 am

Can technology fail humanity?

leave a comment »

Technology, a combination of two Greek words signifying ‘systematic treatment of art/craft/technique,’ is:

the collection of techniques, skills, methods and processes used in the production of goods or services or in the accomplishment of objectives.

Whether it was the discovery of fire, the building of shelter, the invention of weapons – or, in modern times, the invention of the Internet, microchips, etc. – technology has always been about inventing, discovering and using information, techniques and tools to induce economic, scientific and social progress.

However, the progress that technology has brought has been neither linear, nor inevitable, nor ubiquitous, nor even obvious. All Four Great Inventions of China predate the 12th century AD. But, on the other side, despite Hippocrates’ treatise (dating from 400 BC) arguing – contrary to the common ancient Greek belief that epilepsy was caused by offending the moon goddess Selene – that it had a cure in the form of medicine and diet, 12th-14th century Christendom perceived epilepsy as the work of demons and evil spirits, its cure being to pray to St. Valentine and other saints. And in many cases, the progress of technology or its consequences have been a matter of pure chance or serendipity, whether penicillin, X-rays or 3M’s Post-its.

So, ironic as it is, until recently technology hasn’t been very systematic in its own progress, let alone in its impact on the society, economy and culture of nations. But it has become a lot more systematic since the dawn of the Information Age, the last 60 or so years. Since microchips, computer networks and digital communication were invented (all in the US), technology has become more systematic in its own progress, growing more miniature, cheaper, faster and more ubiquitous than ever before in human history. Ubiquitous technology makes the world hyper-connected and digital. Whether it is our phones, thermostats, cars or washing machines, everything is becoming connected to everything. It is thus no coincidence that California (Silicon Valley + Hollywood) has recently become the 6th largest economy in the world, thanks to the technological and creative progress it has embodied over the last 60 or so years.

The Trump era began in January 2017, and he has already done more to damage any potential technological and scientific progress coming from the US than any of his predecessors. From trying to unreasonably curb immigration from Muslim countries, to terminating the TPP, to undoing progress in the transition to clean energy by refocusing on coal, to disempowering the OSTP, Trump wraps his decisions in firebrand rhetoric and well-thought-out psychological biases (anchoring is his favourite) around one message: MAGA. Hopes are turning to China as the next flag-bearer of technological progress.

Nowadays, even coffee shops are hyper-connected, aiming to personalize our coffee-drinking experience. And thanks to the omnipresence and pervasiveness of the Internet, wireless connections, telecommunications, etc., technology (smartphones, games, virtual worlds, 3D headsets, etc.) is becoming an end in itself. In countries and cities like Singapore, Hong Kong and New York, digital and smartphone addiction is already a societal problem, causing unintended deaths, stunted maturity, lost educational productivity and marriage breakups, to cite but a few. In Singapore, where according to recent research Millennials spend an average of 3.4 hours a day on their smartphones, the government is now putting in place policies and organizations to tackle this psychological addiction.

However, even Bernie Sanders knows that technology cannot and should not be an end in itself or an addiction. Could the Internet and its technologies fail? Could the Internet and the thinking linked to it spell the end of capitalism? Could it cause societies, cultures and nations to fail?

Technology has proven to fail itself and us whenever it has become an end in itself.

Technology will succeed only when it stays true to its nature and acts as an enabler, a platform for human endeavors. It can then even end poverty and the other problems the human race is facing.


The 20 Worst Venture Capital Investments of All Time

leave a comment »

Continuing from the previous post on dotcom failures, below is the list of the top 20 venture capital investment failures. Unsurprisingly, names such as Webvan and Pets.com appear both in this list and among the biggest dotcom failures.

1. Amp’d Mobile: Amp’d Mobile takes the crown for money-burning, with $360 million that ended in bankruptcy. The company’s major problem was its customers’ inability to pay. While other mobile providers check for an ability to pay bills within 30 days, Amp’d let it go to 90 days and marketed to these risky customers. It has been reported that 80,000 of the company’s 175,000 customers were unable to pay their bills.

2. Procket: Networking company Procket was once one of the most highly valued telecom startups in the U.S. It had $272 million in venture-capital funding and a valuation of $1.55 billion but was ultimately sold to industry behemoth Cisco Systems Inc. for a disappointing $89 million.

3. Webvan: Webvan was a grocery-delivery business that served nine metropolitan areas. Once valued at $1.2 billion with plans to expand to 26 cities, the company went bankrupt in 2001. Despite millions in sales, its demise was brought on by a cash burn that exceeded sales growth. Major purchases included $1 billion for warehouses, enterprise servers and more than 100 Aeron chairs; it also acquired HomeGrocer just a few months before going under. The fast expansion proved to be too much for Webvan, and the company, which had raised about $800 million in venture capital, ended up with $830 million in losses and only about $40 million on hand.

4. Caspian Networks: Caspian Networks, originally founded as Packetcom Inc., had a number of ups and downs, including a washout in 2002; the company finally shut down in 2006, having fluctuated from more than $300 million in funding and 323 employees to fewer than 100 employees and closed doors.

5. Pets.com: This icon of the dot-com bubble died out in November 2000, going from a NASDAQ listing to liquidation in just nine short months. The site sold pet supplies and accessories online. Once backed with $50 million by Hummer Winblad Venture Partners, Bowman Capital and Amazon.com Inc., Pets.com had promise and even bought out a competitor. But in the end, its stock bottomed out at 19 cents per share. Remembered for its sock-puppet ads, the expense of its $1.2 million Super Bowl ad, as well as large infrastructure investments, proved to be too much. Pets.com’s sock puppet lives on as the icon of BarNone Inc.

6. Optiva: Optiva, a nanotech company that made laminate films for flat-screen TV sets, had to shut down after it failed to raise further funding. It initially raised, and ran through, $41.5 million in venture capital. The problem was that it took too long to release its product, which was obsolete by the time it came to market.

7. Kozmo.com: Kozmo.com’s small-goods delivery service, while a recipient of around $250 million in investment and popular with students and young professionals, ultimately met its end and was liquidated in 2001. Its business model was criticized as unprofitable because it didn’t charge for deliveries. Kozmo.com’s demise is profiled in the documentary film e-Dreams.

8. CueCat: This much-mocked pen-sized bar-code scanner was designed to make finding information about ads easier. Instead, Digital Convergence Corp., CueCat’s creator, burned through $185 million from investors like The Coca-Cola Co. and General Electric Co. The device simply failed to catch on, and it was plagued with security problems.

9. DeNovis Inc.: DeNovis software once attempted to change the medical-claims world but ended up shutting down instead. It raised $125 million in venture capital and had 110 employees. Unfortunately, that wasn’t enough, and this promising solution simply didn’t have the cash to hang on until the software could be launched.

10. PointCast Inc.: After tens of millions of dollars in venture capital and a $400 million buy offer, PointCast was finally sold for $7 million. It was originally touted as the next big thing, but failed to live up to its hype when its software and downloads irritated customers.

The remaining ten are here.

Loads of money poured in; the results were catastrophic. With less capital available, startups and entrepreneurs must still carefully consider their money sources: there are sometimes more headaches and problems attached to money than one would anticipate or would like to have. As an unavoidable consequence, the current economic and financial crisis is making angel investors and venture capitalists more careful and vigilant about what they invest in, pushing them to introduce tighter controls and additional transparency, with the final objective of an (even more rapid) sale or IPO for a startup.

The list was compiled in 2007 and will certainly gain new entrants by the end of this year or the beginning of the next.