Analogies, Overfitting/Underfitting Problems, Generalist vs. Specialist Choice

SilviaZZZ
4 min read · Feb 18, 2021

You may find Range: Why Generalists Triumph in a Specialized World reassuring if, like me, you have switched paths multiple times and are still struggling to find “the one thing”. However, being a jack of all trades will not automatically make you better at processing problems.

A few spoilers about the 333-page book before we segue into our topic: it is barely about cognitive science or any serious biological research. Quite a few of its claims are little more than assumptions built on anecdotal data or flimsy reasoning, but the book did inspire me throughout. Here are some passages I find interesting about the help analogies can give you when solving problems:

“‘In the life we lead today,’ Gentner told me, ‘we need to be reminded of things that are only abstractly or relationally similar.’”

“What seemed like the single best analogy did not do well on its own. Using a full “reference class” of analogies — the pillar of the outside view — was immensely more accurate.”

The ability to find effective analogies comes from what we call “experience” and “knowledge”. So how do you build the experience or skill set that best helps you leverage analogies? If we can answer this question, we may also answer the everlasting “generalist vs. specialist” question.

To find “analogies” that answer this question, I’d like to turn to statistical modeling and AI algorithms, because they are probably the best externalization of how human brains process the world. Just as a human can be poor at solving a problem, a model can be poor at predicting. In the technical world, this is called overfitting or underfitting. To reinterpret the argument David Epstein makes in Range: the disadvantage a specialist has is an overfitting problem. But simply being a jack of all trades won’t inoculate you against overfitting, nor will it protect you from the opposite problem of underfitting.

Overfitting

According to Wikipedia, overfitting in statistics is “the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably”.
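As a minimal sketch of that definition (a toy example I made up, not anything from the book): a polynomial with as many degrees of freedom as there are training points fits those points perfectly, noise and all, yet misses the true underlying trend at unseen points.

```python
# Overfitting sketch: a degree-4 polynomial interpolates 5 noisy training
# points exactly (near-zero training error) but misses the true underlying
# trend (y = 2x) at points it has never seen.

def lagrange_predict(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def mse(xs, ys, train_x, train_y):
    """Mean squared error of the interpolating polynomial on (xs, ys)."""
    return sum((lagrange_predict(train_x, train_y, x) - y) ** 2
               for x, y in zip(xs, ys)) / len(xs)

# True relationship is y = 2x; the training labels carry small fixed "noise".
train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [0.1, 1.9, 4.2, 5.8, 8.1]

# Unseen points drawn from the same true relationship, without noise.
test_x = [0.5, 1.5, 2.5, 3.5]
test_y = [1.0, 3.0, 5.0, 7.0]

train_error = mse(train_x, train_y, train_x, train_y)
test_error = mse(test_x, test_y, train_x, train_y)

print(f"train MSE: {train_error:.6f}")  # ~0: the model memorized the noise
print(f"test  MSE: {test_error:.6f}")   # noticeably larger
```

The model “corresponds exactly to a particular set of data” (training error is essentially zero) while failing to “predict future observations reliably”, which is exactly the trap the definition above describes.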

A few months into my MBA, a question came to my mind: why can people in such a top MBA program be so irresponsible? Isn’t ours the school that weighs a past record of success most heavily in admissions? By that time, I had stayed at one company for too long, and I was under the impression that delivering the best results at all costs (even your sleep), all the time, is something a good teammate SHOULD do (among many other things). So whenever a teammate delivered work that didn’t meet my standard for “good”, or simply failed to deliver anything, I would categorize them as “unreliable people I don’t want to talk to”. As you can imagine, very few classmates were classified as “reliable people” by the end of the two years.

Just as a model trained only on Scottish Folds will mistakenly classify a Sphynx as definitely not a cat the first time it sees one, I didn’t realize I was overfitting until I took my first full-time job at a startup and found more colleagues who could be categorized as “unreliable people”. They are just “normal people”, like you and me. We were trained in different domains, so we had different standards for different types of work; my written English may not meet their standard for “best results”. We had different priorities at work and in life; in many cultures, family, friends, and personal health come before work, regardless of a person’s age group. Even within the workplace, different people see different projects as having different priorities, based on their own experience. It is absolutely unnecessary (and usually impossible) for every teammate to be 100% aligned on every single aspect for an initiative to work. They can be reliable as long as they are working on the right thing and setting the right goal.

Underfitting

Underfitting occurs when a statistical model is too simple to adequately capture the structure of the data, which harms the model’s accuracy just as overfitting does.
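A matching toy sketch (again my own made-up numbers): a constant model that always predicts the mean of the labels is too simple to capture an obvious linear trend, so it is inaccurate even on the data it was trained on.

```python
# Underfitting sketch: a constant model (always predict the mean label)
# cannot capture a clear linear trend, so its error is large even on the
# training set itself.

train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [0.1, 1.9, 4.2, 5.8, 8.1]   # roughly y = 2x

# The "model": a single number, the mean of the training labels.
constant = sum(train_y) / len(train_y)

train_mse = sum((constant - y) ** 2 for y in train_y) / len(train_y)

print(f"constant prediction: {constant:.2f}")
print(f"train MSE: {train_mse:.4f}")  # large even on the training data
```

Where the overfitted polynomial above memorized its training points, this model can’t even fit them: high error everywhere is the signature of underfitting.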

This also happened a few months into my startup job. After I noticed the overfitting problem above, I started to alleviate it by “reducing features” and expanding my “training set” to incorporate “colleagues in startups”. But then I failed to discern “unqualified teammates”, partly because my training set wasn’t labeled correctly and partly because I hadn’t collected enough samples of “qualified teammates” outside my previous company. So I mistakenly assumed that anyone with previous work experience could count as a “qualified teammate”, and that I should simply embrace them as they were and find a way to work with them. Only later did I find out that genuinely “qualified teammates” do exist, that some of my earlier samples were mislabeled, and that there is a limit to “workarounds”.

However, after working through both the overfitting and the underfitting problem, I understand the workplace better than when I first started. It came a bit late, in my 30s, but better late than never.

To summarize: to become better at processing information and solving problems, we need to continuously fine-tune our “training set” and our “model”, even though feedback and results rarely arrive as obviously and quickly as in a perfect experiment. As long as you can find the sweet spot between overfitting and underfitting through “optimization”, it no longer matters whether you are on the path of a generalist or a specialist.
