_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
COMMENT PAGE FOR:
Being too ambitious is a clever form of self-sabotage
avcix wrote 16 hours 5 min ago:
It's funny, I was aspiring to be a game developer when I was a child
and developed some games on Game Maker when I was like 12, and those
were the best games I made.
Then I attended university, but I never found my skills sufficient to
develop a game worthy of my tastes.
juggli wrote 18 hours 22 min ago:
Thank you for sharing, it's a fantastic article. When you feel like
giving up, that's when the real work begins.
CommenterPerson wrote 1 day ago:
Good article. Verified by years of observing my own and others'
failures and successes. Do-Learn is a great positive motto, compared to
the nihilistic "break things", "fail fast", etc.
Must say, it was a bit long. At the beginning, and after looking up the
author, I confess to thinking "Oh no another pretty face influencer".
But it built up very well. My respect level increased a lot when I saw
Olin College of Engineering on her bio. Had checked it out for my
daughter and came away very impressed by their approach. Almost all
American engineering colleges are so heavy on theory and so light on
doing, when it should be the other way around. Kudos.
georgeecollins wrote 1 day ago:
I love the term "taste-skill gap". I work with people who are good at
making movies and people who are good at making video games. There is
always this awkward thing that people who are good in one area are
convinced they would be great in the other. I don't think I have ever
met a film director who didn't think he would be a great video game
designer if he put the time into it, and really good game designers
(well, the narrative game ones) don't understand why they wouldn't be a
good choice to direct a movie.
Taste comes quicker and can be more generalized. It's also pretty easy
to express. Skill has many hidden components, takes experience to hone
and is typically very specific.
0xbadcafebee wrote 1 day ago:
"The gap" is probably what allowed humans to evolve complex solutions.
Rather than just bang things together, they had to think of how they
would do it before they even began. That seems to be the differentiator
between other animals and humans: we can shave a lot more yaks.
I have spent a year on a project that is not really much closer to
completion than when I started. But I have been shaving yaks like a
motherfucker. Research, design iterations, acquiring tools, making
jigs, creating space. (I have also wasted a lot of time due to coping
with ADHD and depression)
I could have done it sooner if I had compromised more. But I wasn't yet
experienced enough to know what compromises to make and still end up
with an acceptable solution. Many things have come up that I didn't
expect in my initial dream. If I'd known then what I know now, I would
have dialed things down.
Ignorance amplified my ambition, and my ambition exceeded my grasp. But
if you never give up, it's not sabotage: it's perseverance. And I
refuse to quit. My grasp is getting stronger. I'm moving forward
faster, getting better. So my ambition (in this case) is a stupid form
of self-improvement. It turns out I'm not building a camper. I'm
building Me.
czhu12 wrote 1 day ago:
I think this is why it helps a lot to build something you actually use.
Because then, the barometer for what is good becomes a lot more
defined: "Did I solve the problem I had", and then slowly build up from
there.
Instead of trying to imagine a thing that someone else might or might
not need.
I've been slowly chipping away at a Heroku alternative called Canine
[1] for the better part of a year now on the side, and for once, I
don't feel tons of pressure or self-loathing for not working on it
quickly enough.
I use it every day now, and whenever I come across something that I
wish was a little better (at the moment, understanding how much memory
is used by the cluster is a pet peeve), I ruminate on it for a few days
before hopping in and making some changes. No more, no less. It helps
me get away from "what is the perfect solution" to "can I fix this
thing that annoys me right now".
[1]: https://canine.sh
whilenot-dev wrote 1 day ago:
> "Did I solve the problem I had"
I really think that's the wrong question, but I don't know how to
formulate it any better... it should be somewhere between playful
curiosity ("how did it advance me a step in my own interests?"),
pragmatic foresight ("how did it open up new possibilities?"), and
bland reflection ("why was it the necessary thing to do at that
moment?").
> "can i fix this thing that annoys me right now"
Whatever your questions might be, I sure hope they won't only aim for
a boolean answer.
prairieroadent wrote 1 day ago:
My intuition on all this, and what you both seem to be getting at,
is that it is wildly difficult to understand the problems of others,
at least to the degree that you understand your own problems, and
even understanding one's own problems is hard. Which leads to the
perspective of focusing on what you understand (your own problems),
and then, as a side effect of addressing your own problem, helping
others by documenting the problem and your attempted solutions.
Documenting and broadcasting adds a little bit of overhead, but it
seems like an environmentally friendly way of participating in
society in a healthy way.
anontrot wrote 1 day ago:
So if I am content with my life, I can't create anything...
Herring wrote 1 day ago:
Nah you can, you just need to worry extra hard about finding
product-market-fit or you’ll have something 100% useless on your
hands.
lotyrin wrote 1 day ago:
Shaw's unreasonable man strikes again.
sonicvrooom wrote 1 day ago:
I don't get this at all.
WTF is "too ambitious"? When people *don't* want to make the only
necessary "sacrifice" aka exchange/trade off? It's usually time that
is otherwise spend on something else, which includes family, friends,
other hobbies but the latter can be taken off the list because implicit
to ambition is the higher priority of the thing or state aspired and
worked on.
The ability to recognize quality grows quicker because of the amount of
people who have successfully made the exchange and either improved
their skill or found and implemented acceptable workarounds.
Most post-modern creation is fractal remixing. It's just effort put
into time. The most untalented people can create superb stuff if they
just keep grinding adequate levels of skill and workarounds.
The beauty, IMO, is in accepting the process of others and supporting,
motivating, and inspiring them with anything one can provide. That will
help them grow both skill and taste, which in turn augments your world
and raises your ambition.
Look at it this way: if you poison your neighbors, you lower the quality
of your environment, which lowers the quality of your personal IO, input
and output. You even lower the standards by which your IO is evaluated,
both by others and by yourself. You basically keep yourself low, and
thus your own creation. That applies to content, products, code, and
any writing.
People are stuck in the old hierarchical ways of thinking. That's not
even annoying. Please hone your sense for quality. You don't owe that
to the old world and its guard, but it would prove their effort was not
for nothing.
Wilsoniumite wrote 1 day ago:
I've faced this problem for almost every task in my life, from the
creative stuff already mentioned to less obvious things, like
socializing (seeing what you said wrong without knowing how you could
have said it better). Because of this, the only things I have been able
to bear "practicing" are ones that are outside of any public view, ones
where my taste was nonexistent. Code is one of them. We don't see much
code (good or bad) in public, and so it's one of the few areas where my
taste could only improve after I saw the failures in my own work once I
had produced it, rather than during.
zahlman wrote 1 day ago:
> The algorithmic machinery of attention has, of course, engineered
simple comparison. But it has also seemingly erased the process that
makes mastery possible. A time-lapse of someone creating a masterpiece
gets millions of views. A real-time video of someone struggling through
their hundredth mediocre attempt disappears into algorithmic obscurity.
Honestly, I have found that the most important reason something gets a
million views is because it got 999,999 views (so the algorithm likes
it more). Lots of popular content doesn't demonstrate that mastery at
all; it demonstrates a dumbed-down presentation of relatively little
actual content, while the really good stuff is something you only
stumble upon by random chance, buried in hundredth-mediocre-attempts.
> I see this in wannabe founders listening to podcasts on loop, wannabe
TikTokkers watching hours of videos as “research,”
... Which feeds right into that. It becomes too easy to mistake fluff
for content and convince yourself of the value of that research. I
think it's something specific to watching video content, too.
One of my own possibly-self-sabotaging ambitions is video rendering
software that I would then use to produce my own content. But then, on
top of the actual software, I would have to figure out how to actually
write the shorter-but-still-compelling scripts that I imagine to be
possible. And I would spend the whole time expecting my work to be
ignored and despairing over that anyway.
uncivilized wrote 1 day ago:
Rare Hackernews W. Thanks for posting. This is a really great read.
matthewsinclair wrote 1 day ago:
> “There is a moment, just before creation begins, when the work
exists in its most perfect form in your imagination.”
I think TS Eliot said this exact thing, but more poetically, in “The
Hollow Men” (1925):
> “Between the conception and the creation, falls the Shadow”
Which remains one of my all time fave pieces of writing. So much said
in so few words.
hamilyon2 wrote 1 day ago:
I'll be fired if I create my worst
aswegs8 wrote 1 day ago:
Well-written but not really anything new or groundbreaking. I think
most people are well aware of this kind of idealization/perfection that
prevents progress.
gcanyon wrote 1 day ago:
>The quantity group would be graded on volume: one hundred photos for
an A, ninety photos for a B, eighty photos for a C, and so on.
> The quality group only need to present one perfect photo.
> At semester's end, all the best photos came from the quantity group.
I think the more interesting experiment would be to give both groups
the same assignment in terms of volume, but tell the quality group they
had to submit N photos but designate one as their choice, to be graded
on the quality of it. I think this would be interesting because my
hypothesis is that people differ in what they consider "good" and the
quality group would end up indicating the "wrong" photo as their choice
nearly 100% of the time.
thenthenthen wrote 1 day ago:
Agree. This method is neither sound nor pedagogically acceptable. This
is still a big issue in art education today.
sweezyjeezy wrote 1 day ago:
100% - the quality group only had one chance to impress the teacher,
whereas quantity group had dozens. The conclusion drawn from this in
the text seems to be based on assumptions. We don't actually know
how many intermediate photographs the quality group took as well, and
without knowing that and also checking the quality of those, it's
hard to say anything useful.
gcanyon wrote 1 day ago:
> At semester's end, all the best photos came from the quantity group.
My parents once owned a photography studio. My stepfather often said
something like, "A great photographer doesn't only take great photos;
he takes many photos of various quality, and never shows anyone the bad
ones."
RaftPeople wrote 1 day ago:
I think the lesson at the end of that semester is a bit muddied. It
says the quantity group figured out a bunch of stuff due to multiple
photos being taken, but there are a couple things we don't know:
1-Just because the single photo group only submitted one photo, they
may have taken just as many as the quantity group
2-How were "best" photos determined (by prof? by class vote?)
If the quality group took as many photos, then the issue is really
about the subjective selection of the "best" photo. The first group had
100x as many photos to choose from as the 2nd group, so it could be
more about how well each person in the 2nd group was able to select the
best photo from their collection, compared to however the "best" photos
were selected out of all photos.
thenthenthen wrote 1 day ago:
This is not a very pedagogic method. Why divide the group?! Having
everyone go through the same exercise seems more valuable to me. This
would get you fired today, tbh.
satyrun wrote 1 day ago:
Exactly.
If you look at a book of Picasso's drawings/paintings he has
thousands of examples of half finished, complete shit.
The masterpieces are the result of picking the best output.
mrbluecoat wrote 1 day ago:
> your brain begins to treat planning as accomplishing
So project managers accomplish nothing?
scottgg wrote 1 day ago:
I enjoyed the writing here a lot - it’s a nice, clearly explained
idea. I can recognize myself in this.
mmsc wrote 1 day ago:
Even if some people are not ready for the day, it cannot always be
night.
luckystarr wrote 1 day ago:
I started my most ambitious project in February. A few years ago I
wouldn't have even dreamed of ever starting let alone finishing it, but
now I have a Claude Code Max Pro subscription and it moves forward at a
steady pace. I expect the first version of it to be finished within the
year. Even if it's written mostly by AI, it's still a lot of work to
get it to do the right thing, but I'm getting better at it.
raphinou wrote 1 day ago:
I'm just starting a software project and, again, it is ambitious and
somewhat complex (multi-sig signoff solution).
I envy people that can identify a simple project and execute it
successfully.
But the challenge of building something more complex is what's
interesting for me. And I'm not sure I would have more success with a
simpler project; I'd get bored rapidly.
9dev wrote 1 day ago:
For me it helps to reflect on my desire to finish, which is mostly
just a fragment of my day job. On my side projects, I can go as deep
into the rabbit hole as I want, and enjoy the journey. Of course it
feels great to publish something eventually, but the zen garden
effect of just bike-shedding to your heart's content is really
something you shouldn’t dismiss immediately.
moooo99 wrote 1 day ago:
> being too ambitious is a clever form of self-sabotage
Nice, I‘m clever!
moritzwarhier wrote 1 day ago:
Have to admit (I added to a snarky comment about Substack and
productivity blogs when this was posted) that it addresses a problem
that is plaguing me.
Still not sure if it will help me overcome this, but the "quitting
point" concept and the drawing example made it a good read for me.
Not 100% the same, but I've also heard there is a correlation between
procrastination and perfectionism, and narcissism (not only
grandiosity, but also vulnerability and low self-esteem): [1] Relevant
proverbs are plenty... "There is no failure except in no longer trying" etc.
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC11353843/#sec3-ijerph...
TrackerFF wrote 1 day ago:
The «taste-skill» thing is something you often see in music. Those
with great taste but limited ability tend to pursue roles like
promoter, agent, producer, etc.
xchip wrote 1 day ago:
And drinking too much water is also bad.
We already know that too much of anything is bad and that virtue is in
the middle.
Please stop writing articles like this.
wizzwizz4 wrote 1 day ago:
Because these articles aren't ambitious enough?
ezekiel68 wrote 1 day ago:
I read the title. Immediate reaction:
"Jeepers - they're on to me!"
baxtr wrote 1 day ago:
> At semester's end, all the best photos came from the quantity group.
The quantity group learned something that cannot be taught: that
excellence emerges from intimacy with imperfection, that mastery is
built through befriending failure, that the path to creating one
perfect thing runs directly through creating many imperfect things.
This reminded me of Roger Federer, who has won 82% of all matches but
only 54% of all points.
I really enjoyed this article and also believe that in many cases doing
is superior to planning.
Just a word of caution: the author doesn’t account for cost. All
examples given are relatively low-cost and high-frequency: drawing
pictures, taking photos, writing blog posts.
The cost-benefit ratio of simply doing changes when costs increase.
Quitting your high-paid job to finally start the startup you’ve been
dreaming of is high-cost and rather low-frequency.
I don’t want to discourage anyone from doing these things, but it’s
obvious to me that the cost/frequency aspect shouldn’t be neglected.
t_hozumi wrote 12 hours 8 min ago:
That's a really important point, and I completely agree.
This perspective reminds me of an excellent book I recently read, How
Big Things Get Done: The Surprising Factors That Determine the Fate
of Every Project, from Home Renovations to Space Exploration and
Everything In Between by Bent Flyvbjerg and Dan Gardner.
This book focuses on extremely high-cost "megaprojects" and
emphasizes the critical importance of thorough "planning" before
execution. This stands in stark contrast to the low-risk creative
activities discussed in the article, which makes the point about cost
even more compelling.
However, rather than being a complete counter-argument, I see a
significant overlap. The book advocates *for low-risk, low-cost
experimentation and creative exploration during the planning phase*
through methods like miniature prototyping and CAD simulations. In
this sense, both the article and the book highlight the value of
iterative approaches, whether it's through frequent, small-scale
actions or through meticulous, low-cost trials before committing to
high-cost endeavors.
Marsymars wrote 21 hours 13 min ago:
> This reminded me of Roger Federer, who has won 82% of all matches
but only 54% of all points.
This is in large part just a function of the way the rules of tennis
work. Consider casino games, for example: the house may only have a 1%
edge, but if you play long enough, the casino will get 100% of your
money.
baxtr wrote 18 hours 40 min ago:
Yeah and that’s the point of the article I think. If you never
swing your can’t never hit.
And I added the notion that even if you miss almost half of your
swings you can still win big time.
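A minimal Monte Carlo sketch (in Python) of that compounding, assuming
independent, identically distributed points, a simplified 6-6 tiebreak,
and a best-of-three match; real tennis has serve effects, so the
numbers are only illustrative. With a 54% chance of winning each point,
the simulated player takes the large majority of matches:

import random

def win_game(p):
    # First to 4 points, win by 2 (covers deuce/advantage implicitly).
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False

def win_tiebreak(p):
    # First to 7 points, win by 2.
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= 7 and a - b >= 2:
            return True
        if b >= 7 and b - a >= 2:
            return False

def win_set(p):
    # First to 6 games, win by 2; tiebreak at 6-6.
    a = b = 0
    while True:
        if win_game(p):
            a += 1
        else:
            b += 1
        if a >= 6 and a - b >= 2:
            return True
        if b >= 6 and b - a >= 2:
            return False
        if a == 6 and b == 6:
            return win_tiebreak(p)

def win_match(p, sets_to_win=2):
    # Best-of-three match: first to two sets.
    a = b = 0
    while a < sets_to_win and b < sets_to_win:
        if win_set(p):
            a += 1
        else:
            b += 1
    return a == sets_to_win

if __name__ == "__main__":
    trials = 20_000
    wins = sum(win_match(0.54) for _ in range(trials))
    print(f"54% of points -> ~{wins / trials:.0%} of best-of-three matches")

The same mechanism drives the casino analogy: a small per-unit edge,
applied over enough rounds, converges to a near-certain overall outcome.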
d4rkn0d3z wrote 1 day ago:
Clearly, this author could sell ice cubes to penguins. What a splendid
piece!
The word that kept coming to my mind as I read this was convergence.
eleveriven wrote 1 day ago:
I've definitely spent more time designing "the perfect system" than
using it. There's a seductive comfort in planning that real execution
just doesn't offer, because actual work has feedback, friction, failure.
ethan_smith wrote 1 day ago:
The "architecture astronaut" syndrome is particularly endemic in
software - we design elegant systems in our heads that would take 10x
the time to build than a simple solution that actually ships.
gizajob wrote 1 day ago:
I thought I wasn’t going to enjoy the article from the title, but it
turned out to be bang on the money with regards to creativity.
nilirl wrote 1 day ago:
Let's be honest, this is us taking a few terms from a few neuroscience
and cognitive psychology papers and running with it.
There are two claims in this post: Initial goals get adjusted as we
discover operating constraints, and it is easier to work with fewer
variables to pay attention to.
I didn't like these sentences in this post:
- "I see this in wannabe ..."
- "Here's what happens to those brave enough to actually begin ..."
Here the author was brave enough to put themselves on a pedestal; like
a true wannabe profound.
Simon_O_Rourke wrote 1 day ago:
There's the flip side to it too... I'm just waiting for an overly
ambitious non-technical colleague in what should be a technical
management role to overreach in terms of role and promotion.
eleveriven wrote 1 day ago:
Yeah, I've seen that dynamic play out. Sometimes it works out okay,
but when it doesn't…
joduplessis wrote 1 day ago:
If you consider any creative endeavor a burden, I suggest you take
another look at why you're doing it. You have to love the process (not just the
outcome), and in this case, that "gap" Ira Glass refers to usually acts
like fuel on the fire.
r33b33 wrote 1 day ago:
LLM written article.
gregjw wrote 1 day ago:
Scope creep.
thrwwXZTYE wrote 1 day ago:
This syndrome is called "eternal child" (puer aeternus) in psychology.
You were destined for great things. You were exceptional as a child;
you learned to associate your great potential with all the good in
yourself, and you built your identity around it. You were ahead of your
peers in elementary school; whatever you applied yourself to, you
excelled at.
So you value that potential as the ultimate good, and any decision
which reduces it in favour of actually doing something - you fear and
avoid with all your soul. Any decision whatsoever murders part of that
infinite potential to deliver something subpar (at best - it's not even
guaranteed you achieve anything).
Over time this fear takes over and stunts your progress. You could be
great, you KNOW you have this talent, but somehow you very rarely tap
into it. You fall behind people you consider "mediocre" and "beneath
you". Because they seem to be able to do simple things like it's the
simplest thing in the world, while you somehow can't "motivate"
yourself to do the "simple boring things".
When circumstances are just right you are still capable of great work,
but more and more the circumstances are wrong, and you procrastinate
and fail. You don't understand why, you focus on the environment and
the things you fail to achieve. You search for the right productivity
hack or the exact right domain that will motivate you. But any domain
has boring, repetitive parts. Any decision is a chance to do something
OK in exchange for infinite potential. It never seems like it's worth
it, so you don't do it.
You start doubting yourself. Maybe you're just an ordinary lazy person?
Being ordinary is the thing you fear the most. It's a complete negation
of your identity. You can be an exceptional genius with problems;
you'll take that any time if the alternative is "just a normal guy".
mkaic wrote 1 day ago:
I feel a bit shaken after reading this comment, to be honest. I don't
think I've ever heard someone so perfectly describe such a major
component of my life experience. It's like you read my mind.
I was a "gifted kid", now I'm a lonely adult living by herself
constantly cycling between complacency, failure, panic, and
productivity. Diagnosed ADHD, choose to stay unmedicated, sometimes
the best employee in my office, usually one of the laziest and most
disappointing employees in my office. Constantly daydreaming about
how better circumstances would change things for the better even
while knowing deep down I'd cause the exact same set of problems for
myself all over again even if I got my Dream Job.
Spent my whole life being told I was exceptional, and, to be fair, I
lived up to it as a kid. These days I'm so terrified of regressing to
being "normal" that I sabotage myself at every turn.
Thank you for leaving this comment. I may bring up the concept with
my therapist and see what she thinks of it.
joewhale wrote 1 day ago:
Wow. You put into words what I've been experiencing but never
understood. Thank you.
wordpad wrote 1 day ago:
Oh God
So, what is the lesson here?
Gotta let go of pride and risk it for the biscuit (ship something)?
mr_mitm wrote 1 day ago:
And what's the lesson for parents? Can it be counter productive to
praise your child a lot?
1auralynn wrote 1 day ago:
The answer here is actually to teach them to self-evaluate, e.g.
"What do you think about your drawing? Should we hang it up?"
Got this from Steve Peters:
[1]: https://en.m.wikipedia.org/wiki/Steve_Peters_(psychiatri...
hengheng wrote 1 day ago:
Don't just reward your child for being smart.
Reward them for listening, integrating, being nice towards
others, relaxed, comfortable, flourishing, in their lane
ajuc wrote 1 day ago:
Praise the effort not the result. And especially don't make it
into identity of the child.
Best: good job studying for that exam.
Meh: good job passing that exam.
Worst: you so smart, everything comes easy to you.
thrwwXZTYE wrote 1 day ago:
There's no lesson. It's hard. Your brain will search for the silver
bullet to skip the boring self-improvement work and feel good NOW.
It'll likely detach your current self from your past self (I was
bad, I discovered this, now I'm great, exceptional and heroic
again). Then you'll again avoid the boring day-to-day work (because
you feel exceptional again) and fail again.
Everything you know is material for your brain to make excuses and
rationalizations. So no lessons work.
What works is retraining the part of the brain that distorts reality
and directs all your thoughts towards these patterns.
It's a lot like debugging. There's a callback in your brain that is
harmful. It triggers every time you have to sacrifice some future
potential for uncertain reality. It is subconscious. Put a
breakpoint in that callback. Try to notice every time it triggers.
At first just notice it, notice what it urges you to do.
When you have it nailed down, try to change it. At that point
you'll recognize the urge and where it comes from. Then it's a matter
of making the decision and committing to something, no matter what. It
doesn't only have to be big things; it can be small things
unrelated to work. It's the same "code". If you do it every time,
you'll retrain it eventually.
At least that's the theory, I'm not there yet.
snarfy wrote 1 day ago:
Physical exercise does wonders for this. The results you achieve
are 100% determined by the time and effort you put in. It's
hard to start, as it's asking for more self-improvement, but if
you can get this one thing, the rest falls into place.
bn-l wrote 1 day ago:
> Put a breakpoint in that callback. Try to notice every time it
triggers. At first just notice it, notice what it urges you to
do.
Damn I love this advice phrased like this.
Joel_Mckay wrote 1 day ago:
I tend to find if it isn't ambitious enough, then it is just
low-hanging fruit for competitors... Chances are someone already
published something similar.
The market usually doesn't want advanced technology, but rather the
comfortable nostalgic dysfunctional totems they always purchased in the
past. =)
"The Man In The White Suit" ( 1951)
[1]: https://archive.org/details/TheManInTheWhiteSuit1951_201810
bravesoul2 wrote 1 day ago:
Prioritisation! It's very hard. Deciding what to do and therefore what
not to do.
strogonoff wrote 1 day ago:
To be strategic, you think hard enough how to get somewhere and
carefully plan and eliminate unknowns until you reach a point when
getting there is no longer interesting.
Congratulations: you have successfully turned your cool idea into a
chore. It’s just a lot of trivial typing and package management and
it might not even be all that impressive when it is done.
Your idea is not at all a path well-trodden, but it is a path down
which you’ve sent a high-resolution camera FPV drone so many times
that you doubt you will see anything new in person.
What might happen then is that you try to keep it interesting by making
it more impressive and raising the bar, by continuing to think and plan
even harder. Why not write it in Rust? Why not make it infinitely
extensible? More diagrams, hundreds more open tabs…
It can absolutely lead to cool ideas with strategic and well-defined
execution plans. Unfortunately, it is also difficult to break this loop
and actually implement without an external force or another mind giving
you some reframing.
eleveriven wrote 1 day ago:
Planning as a dopamine hit, turning creativity into project
management, then raising the complexity bar just to feel engaged
again. It's like chasing novelty within the sandbox instead of
stepping outside it
andoando wrote 1 day ago:
RIP the project I've spent 5 years on. Spent more time thinking
than doing. Shifted goals higher and higher and never felt satisfied
with what I had done. And now, at the supposed end, even my perfect
goal seems completely uninteresting.
astrobe_ wrote 1 day ago:
> It’s just a lot of trivial typing and package management and it
might not even be all that impressive when it is done.
> What might happen then is that you try to keep it interesting by
making it more impressive
This feeling is something that immediately sets off an alarm in my
head.
IRL every time I tried to impress someone, I said or did stupid
things. These experiences are now part of cringe memories about
myself.
In software, the paradox is often that making something simple is
difficult, but easily reproducible and unimpressive for most people.
It is kind of like the engineers' version of when people say that
their 4yo kid could do the same drawings as Picasso.
Just go through the last 90% and finish the thing. Like Antoine de
Saint-Exupéry said, perfection is reached not when there's nothing
else to add, but when there's nothing more to remove.
Then put the V1.0 tag on it and move it to maintenance mode. Then
move to the next project, which very well might be about covering a
different set of needs in the same area.
raynr wrote 1 day ago:
> Congratulations: you have successfully turned your cool idea into a
chore.
The article gave me a vague, off-topic sense of unease but your
comment crystallised the feeling for me.
I really wish less emphasis were placed on this kind of blue-sky,
"strategic" thinking, and more placed on the "chores". Legwork,
maintenance, step-by-step execution of a plan, issue tracking,
perspective shifting etc. are all, in my opinion, critically
important and much more deserving of praise and respect than
so-called "strategic" thinking.
Which, IME, most people can't do anyway! After they've talked their
big talk you suggest that there's a practical, on-ground problem and
they look at you accusingly, like you're sabotaging their picture.
And I'm like, no, my friend; reality is sabotaging your picture, it's
just the two of us here and you're not losing any face by me pointing
that out, and also if you were an actual strategic thinker you'd have
taken my on-ground problem into account already...
manapause wrote 1 day ago:
I’ve found the best strategies are the ones you can abandon.
clearly defined tactics and an appropriate application of people
and resources require a quarterback with an ability to audible.
It’s possible to make no mistakes and still lose, it’s when
people get offended about something they are wrong about that
creates a tolerance for Pyrrhic victories.
strogonoff wrote 20 hours 12 min ago:
> clearly defined tactics and an appropriate application of
people and resources require a quarterback with an ability to
audible
Can you rephrase?
strogonoff wrote 1 day ago:
This might come from childhood and problematic praise patterns. You
can grow to crave both praise and surprise, but at the same time
not really value them when you get them. You might be interested in
doing impressive work as play when you don't know how it will pan out,
but if you don't feel like it is interesting enough, then you are
demotivated.
I think it is important to be able to strategise, especially if you
can delegate parts of the work. If you cannot delegate, there needs
to be a balance with capacity for grunt work. One way to address it
perhaps is learning to get in the zone and enjoy ongoing work as a
process. Unfortunately, sometimes it is hard to snap out of big
picture view and get to it.
scuol wrote 1 day ago:
If this sounds like you, I highly recommend reading "The Problem of the
Puer Aeternus".
You can definitely skip a lot of the tedious bits where the author
essentially copy-pastes other books for analysis, but this is a very
common pattern where people tend to hold themselves back because doing
the unambitious, rather pedestrian next step forward requires one to
face these preconceived notions about oneself, e.g. "I should've done
this long ago", etc.
ninetyninenine wrote 1 day ago:
>Creation is not birth; it is murder. The murder of the impossible in
service of the possible.
What a stupid quote. You know why it's stupid? Because murder is
creation: it is the creation of death while destroying life.
Just use the word the way it's meant to be used. Don't come up with
quotes that sound clever and trick the mind into thinking a statement
is profound when really it's just more word trickery.
jiriro wrote 11 hours 56 min ago:
> Just use the word the way it's meant to be used.
Ha ha, you are funny:-)
This is the whole point of a (natural) language – the meaning of
words is inevitably floating.
Do not nail down a meaning of a word, it’s impossible. Instead, try
to imagine there is no word;-)
the_af wrote 1 day ago:
I thought the phrase was a whimsical/poetic way of saying something
that rings true to me: that all the possibilities in your mind get
narrowed down to a single imperfect one when actually
materializing/putting them into practice -- in a way getting
"destroyed" and replaced with an imperfect but existing version --
and that we sometimes get anxious about this.
It's not the only way of looking at it, but it is one way, and it's
not wrong.
fcatalan wrote 1 day ago:
This resonates a lot with me. In fact it's a trait that has made me
unhappy for as long as I can remember.
I'm seeing a therapist later this month because in a talk with my GP
she saw strong enough hints of ADHD to send me there, and the kind of
situations and some feelings talked about in the article came up a lot
in the conversation.
I size up my oil paints against the old masters, not the old ladies in
the atelier. I paint miniatures way better than average but hang around
with Golden Demon winners so I always find myself wanting. Can play
beautiful Renaissance pieces on my uke, but infuriatingly not at a
professional performance level. Can win many sim races, but not against
the top 0.1%, yet I size myself against their telemetry and laptimes. I
dabble in Chess but being forever stuck around lowly 1300 ELO makes me
feel dumb. My dead side projects cemetery has subdirectories
approaching 3 figures. I go out and cycle with my brother but I huff
and puff while he tops the Strava segments and wins the regional
amateur championship again.
So too many days I just sit and do nothing, or just look for something
else to enjoy for a few months until I become an unhappy promising
beginner at yet another thing, adding to the overall problem.
tayo42 wrote 1 day ago:
Why would you not want to compare yourself with the best?
Just don't drive yourself crazy over it.
willguest wrote 1 day ago:
To have such capacity and drive, as well as critical self-reflection
is a rare thing. I would first suggest some appreciation for the
interesting and curious state of being that you seem to have
developed. Nicely done!
My own route out of this trap was to explore theories of mind and,
more profoundly, practices of no-mind. Doing nothing is much harder
to achieve than doing something and can create a space for insight
that the analytical mind cannot access. From this place, which is
free of comparison and judgement, incredibly beautiful things can
emerge.
If you would like to get to the root of it, I would suggest Taoist
teachings and reading a few things by Krishnamurti. Understanding
the fundamental limitations of the mind can tell you something about
who you are through negation. For me, this has brought a deep sense of
peace as well as an ability to use my mind in a more satisfying way.
Just my two cents :)
whatevertrevor wrote 1 day ago:
I don't want to psychoanalyze but it seems your sense of
dissatisfaction is a little different from what the author is
describing? Your dissatisfaction is from not accomplishing the
possibly implausible goal of being the very best at something without
being a professional competitor, while the author is describing a
case of not even getting started on creative projects out of a fear
of them not living up to a made up standard in your mind.
They're both arguably unreasonable standards but one is for the
end-product (i.e. a novel/album/software project) as opposed to
reaching some apparent level of general skill at your hobby. The
latter is full of traps because for subjective hobbies like arts, how
does one even evaluate that?
kretaceous wrote 1 day ago:
The first two sections reminded me of an observation I've made about
myself: the more I delay "doing the thing" and spend time "researching"
or "developing taste", the more I turn into a critic instead of a
creator.
> Your taste develops faster than your skill
> "the quality group could tell you why a photograph was excellent"
They are critics now. People with a huge taste-skill gap are basically
critics — first towards themselves and gradually towards others. I
don't want to generalize by saying "critics are just failed creators",
but I've certainly found it true for myself. Trying to undo this change
in me and this article kind of said all the words I wanted to hear. :)
It's both dense and beautifully written. Feels like every paragraph has
something profound to say. This kind of
"optimizing-for-screenshot-shares" writing usually gets overdone, but
since this actually had substance, it was amazing to read.
(See how I turned into a critic?)
al_borland wrote 1 day ago:
For those who haven’t run across it, I like the man in the arena
speech from Theodore Roosevelt to put things in perspective when I
turn into a critic, or get harsh feedback from a critic.
“It is not the critic who counts; not the man who points out how
the strong man stumbles, or where the doer of deeds could have done
them better. The credit belongs to the man who is actually in the
arena, whose face is marred by dust and sweat and blood; who strives
valiantly; who errs, who comes short again and again, because there
is no effort without error and shortcoming; but who does actually
strive to do the deeds; who knows great enthusiasms, the great
devotions; who spends himself in a worthy cause; who at the best
knows in the end the triumph of high achievement, and who at the
worst, if he fails, at least fails while daring greatly, so that his
place shall never be with those cold and timid souls who neither know
victory nor defeat.”
satvikpendem wrote 1 day ago:
There is a great comics site that illustrates such quotes:
[1]: https://www.zenpencils.com/comic/theodore-roosevelt-the-ma...
dismalaf wrote 2 days ago:
I hate the title but actually a pretty decent article.
> We are still the only species cursed with visions of what could be.
But perhaps that's humanity's most beautiful accident. To be haunted by
possibilities we cannot yet reach, to be driven by dreams that exceed
our current grasp. The curse and the gift are the same thing: we see
further than we can walk, dream bigger than we can build, imagine more
than we can create.
> And so we make imperfect things in service of perfect visions. We
write rough drafts toward masterpieces we may never achieve. We build
prototypes of futures we can barely envision. We close the gap between
imagination and reality one flawed attempt at a time.
meander_water wrote 2 days ago:
> the "taste-skill discrepancy." Your taste (your ability to recognize
quality) develops faster than your skill (your ability to produce it).
This creates what Ira Glass famously called "the gap," but I think of
it as the thing that separates creators from consumers.
This resonated quite strongly with me. It puts into words something
that I've been feeling when working with AI. If you're new to something
and using AI for it, it automatically boosts the floor of your taste,
but not your skill. And you end up never slowing down to make mistakes
and learn, because you can just do it without friction.
nickelpro wrote 1 day ago:
There's no meaningful taste-skill gap in programming because
programming doesn't involve tacit skills. If you know what you're
supposed to do, it is trivial to type that into a keyboard.
The taste-skill gap emerges when you intellectually recognize what a
quality creation would be, but are physically unable to produce that
creation, and judge the creations you are physically capable of
producing as low quality
The oft cited example is drawing a circle. Everyone knows what a
perfectly round circle looks like, but drawing one takes practice.
It doesn't take practice to type code. If you know what code you're
supposed to write, you write it. The problem is all in the taste
step, to know what code to write in the first place.
mjr00 wrote 1 day ago:
> There's no meaningful taste-skill gap in programming because
programming doesn't involve tacit skills. If you know what you're
supposed to do, it is trivial to type that into a keyboard.
Strongly disagree here. The taste-skill gap still applies even when
there's no mechanical skill involved. A lot of amateur music
production is entirely "in the box" and the taste-skill gap very
much exists, even though it's trivial to e.g. click a button to
change a compressor's settings.
In programming, or more broadly application development, this
manifests as crappy user interfaces or crappy APIs. Some developers
may not notice or care, sure, but for many the feeling is, "this
doesn't seem right, but I'm not exactly sure what's wrong or how to
fix it." And that feeling is the taste-skill gap.
nickelpro wrote 12 hours 12 min ago:
If you know what sound you want to hear, but don't know the
compressor settings to make that sound, that is a taste-skill
gap.
If you don't know what sound you want to hear at all, that's
undeveloped taste.
If you know what code you want to type, but don't know how to use
a keyboard, that would be a taste-skill gap.
If you don't know what code you want to type at all, that's
undeveloped taste.
mitjam wrote 1 day ago:
Yes, and for me vibe coding / agent-assisted coding is not just
pouring out canned skill but about developing the skill to handle this
new machine in a way that produces the intended results.
purplesyringa wrote 1 day ago:
That's absolutely not the case. I can look at code and realize that
it's garbage because the architecture sucks, performance has gone out
the window, and there's lots of special casing and unhandled edge
cases. That's the taste part. But I can also absolutely be
underqualified and unable to figure out how to improve the
architecture, fix performance issues, or simplify special/edge case
handling.
nickelpro wrote 12 hours 15 min ago:
Then your taste hasn't developed. You don't know what good code
for the problem even looks like. It's not that your code doesn't
resemble what you wanted to make, you don't know what you want to
make at all.
simianwords wrote 1 day ago:
This is not what Ira Glass meant by the taste gap. What he means,
rather, is that taste is important: it's what gets you into the field
and what makes you stick around. Happy to be corrected on this.
michaelbrave wrote 1 day ago:
Yes, that was the gist of Ira Glass's quote, but he also added that it
makes you feel frustrated when you have taste but are not creating
things that live up to that taste, and that as a young artist you
should push through that.
Here is a copy paste of the quote:
“Nobody tells this to people who are beginners, I wish someone
told me. All of us who do creative work, we get into it because we
have good taste. But there is this gap. For the first couple years
you make stuff, it’s just not that good. It’s trying to be
good, it has potential, but it’s not. But your taste, the thing
that got you into the game, is still killer. And your taste is why
your work disappoints you. A lot of people never get past this
phase, they quit. Most people I know who do interesting, creative
work went through years of this. We know our work doesn’t have
this special thing that we want it to have. We all go through this.
And if you are just starting out or you are still in this phase,
you gotta know its normal and the most important thing you can do
is do a lot of work. Put yourself on a deadline so that every week
you will finish one story. It is only by going through a volume of
work that you will close that gap, and your work will be as good as
your ambitions. And I took longer to figure out how to do this than
anyone I’ve ever met. It’s gonna take awhile. It’s normal to
take awhile. You’ve just gotta fight your way through.”
― Ira Glass
benreesman wrote 1 day ago:
I don't know much about Ira Glass and I'm not going to be a 5 minute
wikipedia expert about it, so maybe I'm missing out on very relevant
philosophy (I hope someone links the must read thing), but those
would be very intentionally inverted meanings of the taste/skill
dichotomy.
LLMs are good at things with a lot of quantity in the training set;
you can signal-boost stuff, but it's not perfect (and it's non-obvious
that you want rare/special/advanced stuff to be the sweet spot as a
vendor; that's a small part of your TAM by construction).
This has all kinds of interesting tells, for example Claude is better
at Bazel than Gemini is, which is kind of extreme given Google has
infinite perfect Bazel and Anthropic has open source (really bad)
Bazel, so you know Gemini hasn't gotten the google4 pipeline
decontamination thing dialed in.
All else equal you expect a homogenizing effect where over time
everything is like NextJS, Golang, and Docker.
There are outlier events, like how Claude got trained on nixpkgs in a
serious way recently, but idk, maybe they want to get into defense or
something.
Skill is very rarely the problem for computers, if you're considering
it as distinct from taste (sometimes you call them both together just
skill).
theshrike79 wrote 1 day ago:
This is Rick Rubin pretty much. He has 100/100 in taste, but almost
0/100 in skill.
He can't really play an instrument, but he knows exactly what works
and what doesn't and can articulate it.
vikramkr wrote 1 day ago:
P vs NP
alistairSH wrote 1 day ago:
That’s an odd take for a massively successful person. In the
realm of producing hip-hop, his taste and skill are at the top of
the industry.
Sort of like saying Bill Belichick has a skill gap because he’s
not a top NFL player. AFAIK he never played pro ball at all (and
college wasn't at a top D1 program). But he's undeniably one
of the most successful coaches in the business.
mitjam wrote 1 day ago:
He also said he always started with anxiety, was pushing, working
outside his comfort zone. For me this looks very much like "do,
learn". Another Rick Rubin quote: "Humanity breeds in the
mistakes."
[1]: https://www.youtube.com/watch?v=brPHcAJn7ZU
mnky9800n wrote 1 day ago:
I think by skill they mean that Rick Rubin plays no instruments
and actively acknowledges this. In interviews he repeatedly
claims his only skill is knowing what sounds good and will make
money.
satyrun wrote 1 day ago:
Rubin was also in the right place at the right time.
Putting out Run-DMC – Raising Hell, Slayer – Reign in Blood,
and Beastie Boys – Licensed to Ill in the same year is
completely insane, but things are probably much different if he is
20 years older or 20 years younger.
He was in the perfect place as hip hop and metal were taking off.
abenga wrote 1 day ago:
Rick Rubin said this in a popular interview himself, fwiw.
dgfitz wrote 1 day ago:
As an aside, Belichick was a lacrosse player as a
hobby/sport/passion, not an American football player. I'm very
torn at the moment on whether he was an incredible coach or just rode
the wave of Brady's talent.
I pay a lot of attention to football as a hobby (and a gambling
outlet) so these next two seasons at UNC for ‘ol Bill will be
really telling.
BoxFour wrote 1 day ago:
> I’m very torn at the moment if he was an incredible coach
or just rode the wave or Brady talent.
Honestly, it’s hard to imagine they’d have been anywhere
near that successful if the answer wasn't just "both."
You see plenty of examples of great coaches stuck with lousy
rosters (Parcells with the Cowboys), and also great players on
poorly run teams (Patricia-era Lions). Usually when a team only
has one or the other, they continually flame out early in the
playoffs.
> these next two seasons at UNC for ‘ol Bill will be really
telling.
I wouldn’t read too much into that. He’s 73, the game’s
evolved a lot, and coaching college is a whole different thing
from the NFL. It’s incredibly rare for someone to excel at
both — guys like Pete Carroll being the exception that proves
the rule.
satyrun wrote 1 day ago:
Exactly. It is such a stupid debate when Belichick coached
and molded Brady into what he became.
Everyone has always said Belichick is basically an
encyclopedia of football knowledge.
dgfitz wrote 1 day ago:
That’s my whole point. Brady went on to win a ring in
Tampa. Bill did… what?
I don't give Belichick the credit for teaching Brady; you
can't teach that. It's not stupid at all if you're a
fan of the sport.
cheschire wrote 1 day ago:
You’re saying the same thing as GP. Let me attempt to clarify.
What GP is saying is not that Rick Rubin has no skill anywhere,
but that he recognized he has 100/100 taste and instead of trying
to become a hip hop artist, instead became a producer for other
artists.
In the same way, you’ve described how Bill Belichick recognized
his taste in what makes a player good is not enough to make him
also a good player, so he positioned himself to take advantage of
his 100/100 taste rather than whatever skill value he may have.
dasil003 wrote 1 day ago:
It's weird to frame Belichick as a talent picker first. Yes,
he had a lot of control, but he was a coach first, not a GM.
The thing that made him extraordinary was not identifying
talent; it was orchestrating a team system to take advantage of
individual talents. Compared to other coaches that had one
system and tried to fit players rigidly into it, Belichick was
a master of adapting the system to the personnel. Of course he
also had Brady and a lot of control over personnel, but it's
ridiculous to speak as if it was primarily his taste that made
the Patriots great.
missinglugnut wrote 1 day ago:
Being able to articulate taste is a skill in and of itself.
worldsayshi wrote 1 day ago:
Another important skill in this area, or maybe it's a personality
trait: Being able to tell yourself that taste is actually really
important. You have to kind of double down on following ideas to
their extreme, or something like that. Or maybe taking very
subtle emotions very seriously.
Most of the time when you chase taste you are working on
splitting hairs. Or it will look like that to an outside
observer.
dpritchett wrote 1 day ago:
An uncomfortable thing about skill, taste, and experience is
that it’s often easier to demonstrate the superiority of one
path over another than it is to explain the differences in a
way the audience is prepared to absorb.
I imagine this is a large part of why tooling and language wars
are still compelling throughout decades of computing. No amount
of lecturing on the joy of e.g. Rails vs. Node will really
convince anyone to use an “outdated”, slow, dynamically
typed language like Ruby in 2025 — even in places where
it’d be a major win.
furyofantares wrote 1 day ago:
I'm confused. I often say of every genAI I've seen of all types that
it is totally lacking in taste and only has skill. And it drastically
raises your skill floor immediately, perhaps all the way up to your
taste, closing the gap.
Maybe that actually is what you were saying? But I'm confused because
you used the opposite words.
furyofantares wrote 1 day ago:
After sleeping on it and reading some replies I think I worked out
what they were saying. Take drawing - your skill at producing an
image is raised to a professional aesthetic (what I was saying) but
your skill at drawing is unchanged (what they are saying).
But they're saying your taste, in the context of self-judgment at
attempting to learn to draw, might also be raised to a professional
aesthetic, because you can already produce images of that level by
typing words.
I guess I will add that a difference here is we are talking about
taste somewhat differently. To me, genai has been a demonstration
that taste and skill are not two points on the same dimension.
debugnik wrote 1 day ago:
Closing the gap? I think we're inverting the gap: Many people now
have access to a higher skill level than they've developed taste
for (if they ever did), which makes them unable to judge their own
slop.
dmbche wrote 1 day ago:
Might be unrelated, but I feel like the "boost" that everyone is
talking about is caused by translating one medium into text, which
most people are more capable with than the medium they are trying
to produce in.
While it lets you create something you previously couldn't, the
qualities of the medium are replaced with those of language.
I.e. to produce visual images you don't need an understanding of
contrast, composition, transparency, chroma and all that; you just
need to be able to articulate what you want.
I think that's where the lack of taste appears: you have a
text-based interaction with a non-language medium.
It's like how, when a movie tries to keep as close as possible to a
book, it rarely will be a noteworthy movie, versus something built
from the ground up in that medium.
ItsHarper wrote 1 day ago:
Yeah this type of gap is going to become a huge problem the way
things are going
conartist6 wrote 1 day ago:
The gap will open itself back up again. If you can do anything in
10 seconds with a GenAI, it won't be long until 1,000,000 people
have all done it and it's considered poor taste...
phi-go wrote 1 day ago:
To me the argument also only makes sense as you understood it.
chatmasta wrote 1 day ago:
This is exactly why I’m wary of ever attempting a developer-focused
startup ever again.
What’s not mentioned is the utter frustration when you can see your
own output is not up to your own expectations, but you can’t
execute on any plan to resolve that discrepancy.
“I know what developers want, so I can build it for them” is a
death knell proportionate to your own standards…
The most profitable business I built was something I hacked together
in two weeks during college holiday break, when I barely knew how to
code. There was no source control (I was googling “what is
GitHub” at the time), it was my first time writing Python, I stored
passwords in plaintext… but within a year it was generating $20k a
month in revenue. It did eventually collapse under its own weight
from technical debt, bugs and support cost… and I wasn’t equipped
to solve those problems.
But meanwhile, as the years went on and I actually learned about
quality, I lost the ability to ship because I gained the ability to
recognize when it wasn’t ready… it’s not quite
“perfectionism,” but it’s borne of the same pathology, of
letting perfect be the enemy of good.
Kheyas wrote 1 day ago:
Do you need to ship?
webdevver wrote 1 day ago:
what was the $20k/mo business?
chatmasta wrote 1 day ago:
Selling proxies for scraping… this was circa 2011.
ido wrote 1 day ago:
> a developer-focused startup
I'm sorry to tell you it doesn't just apply to developer-focused
startups!
ludicrousdispla wrote 1 day ago:
Within every startup, there is a developer-focused startup that
is trying to get out. I suppose that is because it is easier for
people to think about problems that affect them directly.
Or maybe it's the only way in which companies these days give
software developers agency.
gsf_emergency_2 wrote 1 day ago:
>letting perfect be the enemy of good.
My attempt to improve the cliche:
Let skill be the enemy of taste
2 issues here. Neither can be developed (perfected?) in isolation,
but they certainly ramp up at different rates. They should probably
feed back into each other somehow, whether adversarially or not
whatevertrevor wrote 1 day ago:
The issue as the article points out is you can grow taste much
much faster by only engaging in consumption, which leaves skill
in the dirt.
gsf_emergency_2 wrote 1 day ago:
I've heard that one way to pace is to... only consume your own
stuff (aka dogfooding :)
More grown-up way to do it is to consume your mates' stuff?
(Trying to go from where TFA left off)
the_af wrote 1 day ago:
I think you must consume (I hate that word, but let's go with
it) elsewhere. Someone said (maybe Stephen King in "On
Writing"?) in order to be a writer you must be voracious
reader, and there's no escaping this. It rings true to me.
Of course the problem of taste growing much faster than skill
remains, but I don't think the answer is to "consume" (yuck)
less. I actually don't know if there's an answer.
jpc0 wrote 9 hours 12 min ago:
Something important is “consuming” critically.
You can be a passive consumer and never improve your taste
or skill. However, when you consume with the intent of asking how
(for skill) and why (for taste), and then attempting to answer
those questions, you get a much different experience.
Read code, looking for patterns. Look at design, looking for
patterns.
Then play: try to implement what you saw, implement the opposite
and see how it feels, see what happens to the code.
This is a lot of work, but helps you improve.
gsf_emergency_2 wrote 1 hour 12 min ago:
You've just suggested to me the following optimization:
Prioritize "consuming" your frenemies' stuff
Because one always has to pay full attention when doing
that :)
gsf_emergency_2 wrote 19 hours 51 min ago:
>yuck
Some alts to choose from:
"use","utilize","imbibe","process","assimilate","experience
"
zambal wrote 1 day ago:
I don't think it's actually a problem. Taste can guide the
direction skill needs to go.
milkey_mouse wrote 1 day ago:
If anything it's the opposite, except maybe at the very low end: AI
boosts implementation skill (at least by increasing speed), but not
{research, coding, writing} taste. Hence slop of all sorts.
Loughla wrote 2 days ago:
This is the disconnect between proponents and detractors of AI.
Detractors say it's the process and learning that builds depth.
Proponents say it doesn't matter because the tool exists and will
always exist.
It's interesting seeing people argue about AI, because they're
plainly not speaking about the same issue and simply talking past
each other.
jibal wrote 1 day ago:
This is a radical misrepresentation of the dispute.
Loughla wrote 1 day ago:
I disagree with you. Please expand.
skydhash wrote 1 day ago:
Not GP, but I agree with him and I will expand.
The fact isn't that we don't know how to use AI. We've done so
and the result can be very good sometimes (mostly because we
know what's good and what's not). What's pushing us away from it is
its unreliability. Our job is to automate some workflows (the
business's and some of our own) so that people can focus on the
the important matters and have the relevant information to make
decisions.
The defect of LLMs is that you have to monitor their whole output.
It's like driving a car where the steering wheel is loosely
connected to the front wheels and the position for straight
ahead varies all the time. Or in the case of agents, it's like
sleeping in a plane and finding yourself in Russia instead of
Chile. If you care about quality, the cognitive load is a lot.
If you only care about moving forward (even if the path made is
a circle or the direction is wrong), then I guess it's OK.
So we go for standard solutions where fixed problems stay
fixed and the number of issues is a downward slope (in a well
managed codebase), not an oscillating wave that is centered
around some positive value.
Loughla wrote 1 day ago:
I understand that but I'm not sure how it's a response to my
original statement.
ninetyninenine wrote 1 day ago:
>It's interesting seeing people argue about AI, because they're
plainly not speaking about the same issue and simply talking past
each other.
There are actually some ground-truth facts about AI that many people
are not knowledgeable about.
Many people believe we understand in totality how LLMs work. The
truth is that, overall, we do NOT understand how LLMs work at all.
That mistaken belief is the driver behind most of the arguments.
People think we understand LLMs, and that we understand their output
to be just stochastic parroting, when the truth is we do not
understand why or how an LLM produced a specific response to a
specific prompt.
Whether the process of an LLM producing a response resembles
anything close to sentience or consciousness, we actually do not
know because we aren't even sure about the definitions of those
words, Nor do we understand how an LLM works.
This erroneous belief is so pervasive amongst people that I'm
positive I'll get extremely confident responses declaring me wrong.
These debates are not the result of people talking past each other.
It's because a large segment of people on HN are literally
misinformed about LLMs.
exceptione wrote 1 day ago:
> we do NOT understand how LLMs work AT all.
> We Do Not understand Why or How an LLM produced a specific
response for a
> specific prompt.
You mean the system is not deterministic? How the system works
should be quite clear. I think the uncertainty is more about the
premise that billions of tokens and their weights relative to each
other are enough to reach intelligence. These debates are older
than LLMs. In 'old' AI we were looking at (limited) autonomous
agents that had the capability to participate in an environment
and exchange knowledge about the world with each other. The next
step for LLMs would be to update their own weights, but that would
still be too costly in terms of money and time. What we do know is
that for something to be seen as intelligent it cannot live in a
jar. I consider the current crop as shared 8-bit computers, while
each of us needs one with terabytes of RAM.
ninetyninenine wrote 1 day ago:
[1] For context, Geoffrey Hinton is basically the Father of AI.
He's responsible for the current resurgence of machine learning
and utilizing GPUs for ML.
The video puts it plainly. You can get pedantic and try to
build scaffolding around your old opinion in an attempt to fit it
into a different paradigm, but that's just self-justification
and an attempt to avoid realizing or admitting that you held a
strong belief that was utterly incorrect. The overall point is:
We have never understood how LLMs work.
That's really all that needs to be said here.
[1]: https://www.youtube.com/watch?v=qrvK_KuIeJk&t=284s
whatevertrevor wrote 1 day ago:
I couldn't agree more, and not just on HN but the world at large.
For the general populace including many tech people who are not
ML researchers, understanding how convolutional neural nets work
is already tricky enough. For non-tech people, I'd hazard a guess
that LLM/generative AI is complexity-indistinguishable from "The
YouTube/Tiktok Algorithm".
And this lack of understanding, and in many cases lack of
conscious acknowledgement of the lack of understanding, has made
many "debates" sound almost like theocratic arguments. Very
little interest in grounding positions against facts, yet
strongly held opinions.
Some are convinced we're going to get AGI in a couple years,
others think it's just a glorified text generator that cannot
produce new content. And worse there's seemingly little that
changes their mind on it.
And there are self-contradictory positions held too. Just as an
example: I've heard people express that AI-produced stuff doesn't
qualify as art (philosophically and in terms of output quality)
but at the same time express deep concern about how tech companies
will replace artists...
the_af wrote 1 day ago:
> Just as an example: I've heard people express that AI-produced
stuff doesn't qualify as art (philosophically and in terms of
output quality) but at the same time express deep concern about
how tech companies will replace artists...
I don't think this is self-contradictory at all.
One may have beliefs about the meaning of human-produced art
and how it cannot -- and shouldn't -- be replaced by AI, and at
the same time believe that companies will cut costs and replace
artists with AI, regardless of any philosophical debates. As an
example, studio execs and producers are already leveraging AI
as a tool to put movie industry professionals (writers, and
possibly actors in the future) "in their place"; it's a power
move for them, for example against strikes.
whatevertrevor wrote 22 hours 50 min ago:
Yeah, I know that's the theory, but if AI generated art is
slop then it follows that it can't actually replace quality
art.
I don't think people will suddenly accept worse standards for
art, and anyone producing high quality work will have a
significant advantage.
And now if your argument is that the average consumer can't
tell the difference, then well for mass production does the
difference actually matter?
the_af wrote 21 hours 48 min ago:
Well, my main argument is that it's replacing humans, not
that the quality is necessarily worse for mass produced
slop.
Let's be cynical for a moment. A lot of Hollywood (and
adjacent) movies are effectively slop. I mean, take almost
all blockbusters, almost 99% of action/sci-fi/superhero
movies... they are slop. I'm not saying you cannot like
them, but there's no denying they are slop. If you take
offense at this proposition, just pretend it's not about
any particular movie you adore, it's about the rest -- I'm
not here to argue the merits of individual movies.
(As an aside, the same can be said about a lot of fantasy
literature, Young Adult fiction, etc. It's by the numbers
slop, maybe done with good intentions but slop
nonetheless).
Superhero movie scripts could right now be written by AI,
maybe with some curation by a human reviewer/script doctor.
But... as long as we accept these movies still exist, do we
want to cut most humans out of the loop? These movies
employ tons of people (I mean, just look at the credits),
people with maybe high aspirations to which this is a job,
an opportunity to hone their craft, earn their paychecks,
and maybe eventually do something better. And these movies
take a lot of hard, passionate work to make.
You bet your ass studios are going to either get rid of all
these people or use AI to push their paychecks lower, or
replace them if they protest unhealthy working conditions
or whatever. Studio execs are on record admitting to this.
And does it matter? After all, the umpteenth Star Wars or
Spiderman movie is just more slop.
Well, it matters to me, and I hope it's clear my argument
is not exactly "AI cannot make another Avengers movie".
I also hope to have shown that this position is not
self-contradictory at all.
ants_everywhere wrote 1 day ago:
I usually see the opposite.
Detractors from AI often refuse to learn how to use it or argue
that it doesn't do everything perfectly so you shouldn't use it.
Proponents say it's the process and learning that builds depth and
you have to learn how to use it well before you can have a sensible
opinion about it.
The same disconnect was in place for every major piece of
technology, from mechanical weaving, to mechanical computing, to
motorized carriages, to synthesized music. You can go back and read
the articles written about these technologies and they're nearly
identical to what the AI detractors have been saying.
One side always says you're giving away important skills and the
new technology produces inferior work. They try to frame it in
moral terms. But at heart the objections are about the fear of
one's skills becoming economically obsolete.
SirHumphrey wrote 1 day ago:
> Detractors from AI often refuse to learn how to use it or argue
that it doesn't do everything perfectly so you shouldn't use it.
But here is the problem: to effectively learn the tool, you must
learn to use it. Not learning how to use AI effectively and then
complaining that the results are bad is building a straw man and
then burning it.
But what I am giving away when using an LLM is not skills, it's the
ability to learn those skills. Because if the LLM, instead of me,
is solving all the easy and intermediate problems, I cannot learn how
to solve hard problems. The process of digging for an answer
through documentation gives me a better understanding of how some
technology works.
Those kinds of problems existed before: programming languages
robbed people of the necessity to learn assembly, high-level
languages of the necessity to learn low-level languages, low-code
solutions of the necessity to learn how to code. Some of
these solutions (like low-level and high-level programming
languages) are robust enough that this trade-off makes sense;
some are not (like low code).
I think it's too early to call whether AI agents go one way or
the other. Putting eggs in both baskets means learning how to use
AI tools and at the same time still maintaining the ability to
work without them.
fsmv wrote 1 day ago:
If you assume all AI detractors haven't tried it enough then
you're the one building a straw man
ants_everywhere wrote 1 day ago:
I said often not always
seadan83 wrote 1 day ago:
All the same. There's a mixture of no-true-scotsman in the
argument that (paraphrasing) "often they did not learn to
use the tool well", and then this part is a strawman
argument:
"They try to frame it in moral terms. But at heart the
objections are about the fear of one's skills becoming
economically obsolete."
ants_everywhere wrote 12 hours 3 min ago:
I remember when I first learned the names of logical
fallacies too, but you aren't using either of them
correctly
seadan83 wrote 6 hours 29 min ago:
Then please educate me on how the logical fallacies are
misapplied.
In short, what it comes down to, is you do not know
this to be true: "Detractors from AI often refuse to
learn how to use it or argue that it doesn't do
everything perfectly so you shouldn't use it." If you
do know that to be true, please provide the citations.
Sociology is a bitch, because we like to make
stereotypes but it turns out that you really don't know
anything about the individual you are talking to. You
don't know their experiences, their learnings, their
age.
Further, humans tend to have very small sample sizes
based on their experiences. If you met one detractor
every second for the rest of the year, your experiences
would still not be statistically significant.
You can say, in your experience, in your conversations,
but as a general true-ism - you need to provide some
data. Further, even in your conversations, do you
always really know how much the other person knows? For
example, you assumed (or at least heavily implied) that
I just learned the name of logical fallacies. I'm
actually quite old, it's been a long while since I
learned the name of logical fallacies. Regardless, it
does not matter so long as the fallacies are correctly
applied. Which I think they were, and I'll defend it in
depth compared to your shallow dismissal.
Quoting from earlier:
> Detractors from AI often refuse to learn how to use
it.. you have to learn how to use it well before you
can have a sensible opinion about it.
Clearly, if you don't like AI, you just have not
learned enough about it. This argument assumes that
detractors are not coming from a place of experience.
This is a no-true-Scotsman. They wouldn't be
detractors if they had more experience, you just need
to do it better! The assumption of the experience level
of detractors gives away the fallacy. Clearly
detractors just have not learned enough.
From a definition of no-true-scotsman [1], "The no true
Scotsman fallacy is the attempt to defend a
generalization by denying the validity of any
counterexamples given." In this case, the
counterexamples provided by detractors are discounted
because they (assumingly) simply have not learned how
to use AI. A detractor could say "this technology does
not work", and of course they are 'wrong' because they
don't know how to use it well enough. Thus, the
generalization is that AI is useful and the detractors
are wrong due to a lack of knowledge (and so implying
if they knew more, they would not be detractors).
-----
I'll define here that a straw man is misrepresenting a
counterargument in a weaker form, and then showing
that weaker form to be false in order to discredit the
entirety of the argument.
There are multiple straw men:
> The same disconnect was in place for every major
piece of technology, from mechanical weaving, to
mechanical computing, to motorized carriages, to
synthesized music. You can go back and read the
articles written about these technologies and they're
nearly identical to what the AI detractors have been
saying... They try to frame it in moral terms.
Perhaps the disconnect is actually different. I'd say
it is. Because there is no fear of job loss from AI
(from this detractor at least) these examples are not
relevant. That makes them a strawman.
> But at heart the objections are about the fear of
one's skills becoming economically obsolete.
So:
(1) The argument of detractors is morality based
(2) The argument of detractors is rooted in the fear
of "becoming economically obsolete".
I'd say the strongest argument of detractors is that
the technology simply doesn't work well. Period. If
that is the case, then there is NO fear of "becoming
economically obsolete."
Let's look at the original statement:
> Detractors say it's the process and learning that
builds depth.
Which means detractors are saying that AI tools are bad
because they prohibit learning. Yet, now we have words
put in their mouths that the detractors actually fear
becoming 'economically obsolete', and that it's similar to
other examples that did not prove to be the case. That
is exactly a weaker form of the counterargument that
is then discredited through the examples of synthesized
music, etc.
So, it's not the case that AI hinders learning, it's
that the detractors are afraid AI will take their jobs
and they are wrong because there are similar examples
where that was not the case. That's a strawman.
[1]: https://www.scribbr.com/fallacies/no-true-scot...
paulryanrogers wrote 1 day ago:
I stopped using auto complete for a while because I found that
having to search for docs and source forced me to learn the
APIs more thoroughly. Or so it seemed.
ludicrousdispla wrote 1 day ago:
>> Proponents say it's the process and learning that builds depth
and you have to learn how to use it well before you can have a
sensible opinion about it.
That's like telling a chef they'll improve their cooking skills
by adding a can of soup to everything.
Shorel wrote 1 day ago:
> But at heart the objections are about the fear of one's skills
becoming economically obsolete.
Unless I can become a millionaire just with those skills, they
are in a limbo between economically adequate and economically
obsolete.
bluefirebrand wrote 1 day ago:
> But at heart the objections are about the fear of one's skills
becoming economically obsolete.
I won't deny that there is some of this in my AI hesitancy
But honestly the bigger barrier for me is that I fear signing my
name on subpar work that I would otherwise be embarrassed to
claim as my own
If I don't type it into the editor myself, I'm not putting my
name on it. It is not my code and I'm claiming neither credit
nor responsibility for it
benreesman wrote 1 day ago:
I think you're very wise to preserve your commit handle as
something other than a shift operator annotation, not everyone
is.
I think I'm using it more than it sounds like you are, but I
make very clear notations to myself and others about what's a
big generated test suite that I froze in amber after it cleared
a huge replay event, and what I've gone over with a fine-tooth comb
personally. I type about the same amount of prose and code
every day as ever, but I type a lot of code into the prompt now
"like this, not like that" in a comment.
The percentage of hand-authored lines varies wildly from
probably 20% of unit tests to still close to 100% on io_uring
submission queue polling or whatever.
If it one-shots a build file, eh, I put opus as the
meta.authors and move on.
mwcampbell wrote 1 day ago:
I wonder if it's actually accurate to attribute authorship to
the model. As I understand it, the code is actually derived
from all of the text that went into the training set. So,
strictly speaking, I guess proper attribution is impossible.
More generally, I wonder what you think about the whole
plagiarism/stealing issue. Is it something you're at all
uneasy about as you use LLMs? Not trying to accuse or argue;
I'm curious about different perspectives on this, as it's
currently the hang-up preventing me from jumping into
LLM-assisted coding.
benreesman wrote 1 day ago:
I'm very much on the record that I want Altman tried in the
Hague for crimes against humanity, and he's not the only
one. So I'm no sympathizer of the TESCREAL/EA sociopaths
who run frontier AI labs in 2025 (Amodei is no better).
And in a lot of areas it's clearly just copyright
laundering, the way the Valley always says that breaking
the law is progress if it's done with a computer (AI means
computer now in policy circles).
But on code? Coding is sort of a special case in the sense
that our tradition of
sharing/copying/pasting/gisting-to-our-buddies-fuck-the-boss
is so strong that it's kind of a different thing. Coding
is also a special case on LLMs being at all useful over and
above like, non-spammed Google, it's completely absurd that
they generalize outside of that hyper-specific niche. And
it's completely absurd that `gpt-4-1106-preview` was better
than pre-AI/pre-SEO Google: the LLM is both arsonist and
fireman, like Ethan Hunt in that Mission Impossible flick
with Alec Baldwin.
So if you're asking if I think the frontier vendors have
the moral high ground on anything? No, they're very very
bad people and I don't associate with people who even work
there.
But if you're asking if I care about my code going into a
model?
[1]: https://i.ibb.co/1YPxjVvq/2025-07-05-12-40-28.png
armada651 wrote 1 day ago:
> If I don't type it into the editor myself, I'm not putting my
name on it. It is not my code and I'm not claiming either
credit nor responsibility for it
This of course isn't just a moral concern, it's a legal one. I
want ownership of my code, I don't want to find out later the
AI just copied another project and now I've violated a license
by not giving attribution.
Very few open-source projects are in the public domain and even
the most permissive license requires attribution.
thfuran wrote 1 day ago:
And, though I don't think it's nearly settled, in other areas
courts seem to be leaning towards the output of generative AI
not being copyrightable.
add-sub-mul-div wrote 1 day ago:
Unfortunately the majority don't think like this and will take
whatever shortcut allows them to go home at 5.
jchw wrote 2 days ago:
> It's interesting seeing people argue about AI, because they're
plainly not speaking about the same issue and simply talking past
each other.
It's important to realize this is actually a general truth of
humans arguing. Sometimes people do disagree about the facts on the
ground and what is actually true versus what is bullshit, but a lot
of the time what really happens is people completely agree on the
facts and even most of the implications of the facts but completely
disagree on how to frame them. Doesn't even have to be Internet
arguments. A lot of hot-button political topics have always been
like this, too.
It's easy to dismiss people's arguments as being irrelevant, but I
think there's room to say that if you were to interrogate their
worldview in detail you might find that they have coherent
reasoning behind why it is relevant from their perspective, even if
you disagree.
Though it hasn't really improved my ability to argue or even not
argue (perhaps more important), I've definitely noticed this in
myself when introspecting, and it definitely makes me think more
about why I feel driven to argue, what good it is, and how to do it
better.
w10-1 wrote 2 days ago:
Recognizing delusions is probably the highest form of wisdom. It can
help us avoid entire wasted lives.
That said, "Do-learn" sort of begs the question, and it's only a
half-step. How do you know when you're polishing a turd? Who's to say
this cycle is virtuous or vicious?
The second part is that after you drop your self-centered delusion of
seeking perfection, you actually start to find and solve other people's
problems.
It might not be pretty or fun, but that's what has value.
If you're interested in building companies, the key factor is not the
technology or even the team, but the market -- the opportunity to help.
Then it's not really your ambition: it's a need that needs filling, and
the question is whether you can find the people and means to do it, and
you'll find both the people and the means are inspired not by your
ambition, but by your vision for how to fill the need, in a kind of
self-selected alignment and mutual support.
deadbabe wrote 2 days ago:
Ambition is the enemy of consistency.
thehappyfellow wrote 2 days ago:
It's closely related to another truth:
Unconstrained curiosity is a vice, not a virtue.
james-bcn wrote 1 day ago:
Sorry I don't consider this a "truth" at all.
Unconstrained curiosity is a superpower. Some of the greatest people
in history have had immense curiosity. Think Newton, Darwin, Feynman.
In fact pretty much any great scientist is great because of their
wide curiosity. It's often the crossover between things that seem
unrelated where the breakthroughs lie.
It's a joy to have "the pleasure of finding things out" and I pity
anyone who lacks it.
bGl2YW5j wrote 2 days ago:
To you maybe. People get satisfaction and purpose from different
things. Unbounded curiosity can often drive tangible outcomes too.
You might even have that curiosity to thank for methods and tools you
use in your own pursuits!
jebarker wrote 2 days ago:
Especially if you’re a cat. Seriously though, I don’t like
hearing this - curiosity about all things is sort of what keeps me
getting up each morning.
SeanAnderson wrote 2 days ago:
I find it surprisingly difficult to lower my standards once I feel
committed to an idea. I wish this article leaned a little more into
ways to address that sort of dilemma.
Don't get me wrong, I agree fully with the article. I put it into
practice plenty well in many areas of my life. I've made great progress
with my diet, self-care, and physical fitness routines by keeping my
goals SMART.
And yet, a few years ago, I got this idea in my head for a piece of
software I wanted to create that is, if not too ambitious, then clearly
asking all of me and then some. The opening paragraph of the article
really resonated with me -- "The artwork that will finally make the
invisible visible."
And so, I've chipped away at the idea here and there, but I find myself
continually put off by "the gap" - even though I know it's to be
expected and is totally human.
Part of me wishes I had never dared to dream so big and wishes I could
let the idea go entirely. Another part of me is mad and ashamed for
thinking like that about a personal dream.
Anyway, don't know where I'm going with all this. Just felt like
remarking on the article since it struck close to home.
P.S. if you haven't seen the Ira Glass video, I'd take a look. It's
pretty inspirational. Here's Part 3 which is what the article was
referencing.
[1]: https://www.youtube.com/watch?v=X2wLP0izeJE
cl42 wrote 2 days ago:
In the spirit of July 4, John Lewis Gaddis explores a similar theme in
"On Grand Strategy". This is one of my favourite explorations, where he
compares Abraham Lincoln and John Quincy Adams:
> Compare Lincoln’s life with that of John Quincy Adams. Great
expectations inspired, pursued, and haunted Adams, depriving him, at
critical moments, of common sense. Overestimations by others—which he
then magnified—placed objectives beyond his reach: only self-demotion
brought late-life satisfaction. No expectations lured Lincoln apart
from those he set for himself: he started small, rose slowly, and only
when ready reached for the top. His ambitions grew as his opportunities
expanded, but he kept both within his circumstances. He sought to be
underestimated.
The point -- being too ambitious can slow you down if you're not
strategic.
strogonoff wrote 1 day ago:
Some people grow to crave praise but also, when they get it, not
really value it; they want people to always be surprised at the cool
stuff they can do, but are not motivated to do boring, uninteresting
work. This may be accompanied by one or more of: perfectionism,
narcissism, rejection anxiety, etc.
I suspect this might have to do with praise patterns in childhood.
MichaelZuo wrote 2 days ago:
It almost seems like a tautology.
e.g. By definition the 99.9th percentile person cannot live a
99.999th percentile life, if they did they would in fact be that
amazing.
cantor_S_drug wrote 1 day ago:
Can we invoke a version of the 80-20 rule here: that the top 0.1% of
people will easily capture 80% of the success, while each subsequent
marginal capture takes increasingly more investment and luck?
jahewson wrote 1 day ago:
John Quincy Adams was arguably such a 99.999th percentile person
though.
thrwwXZTYE wrote 1 day ago:
A significant part of what separates the 99.9th (or even 90th) from
the 99.999th percentile is ego management.
In particular IQ is not associated with better life outcomes after
you have "enough", and that "enough" isn't Mensa level.
MichaelZuo wrote 13 hours 17 min ago:
How could that possibly be true?
The former might be a literal genius (in the genuine unironic
sense) in one field, say software engineering of astrophysics or
banking or diplomacy.
The latter would be a literal genius in all four fields
simultaneously.
majormajor wrote 2 days ago:
> e.g. By definition the 99.9th percentile person cannot live a
99.999th percentile life, if they did they would in fact be that
amazing.
This seems far too deterministic and I think is contrary to what
you're replying to.
It sounds more like a 99.999th percentile person[0] that constantly
reaches too far too early, before being prepared, will not have a
99.999th percentile life. A 99th percentile person who, on the
other hand, does not constantly fail due to over-reach, can easily
end up accomplishing more. (And there are many other things that
might hold them back too - they might get hit by a car while
crossing the street.)
[0] in whatever measurement of "capability" you have in mind
MichaelZuo wrote 2 days ago:
Well the critical thing is that we can’t determine who is at
what percentile until after the fact. So for example an early
bloomer genius type, who is 99.999th percentile among everyone in
the same birth year cohort, could suddenly crash back down
towards the average.
There’s no practical way to determine that looking forwards in
time.
wrs wrote 2 days ago:
I’m very good at one thing (thank goodness), but I do some other
things that I’m not good at, to remind myself how nice it feels to
just do something without the pressure of having to be good at it.
I also think being a beginner at other things reminds me that failure
is what learning feels like, which gives me some perspective when my
“real” job feels difficult although I’m supposedly so good at i…
When I look back at big things I’ve done, they’re all the result of
just “doing the thing” for a long time and making thousands of
course corrections. Never the result of executing the perfect
crystalline plan.
eleveriven wrote 1 day ago:
There's something really valuable about stepping outside your comfort
zone and letting yourself just be bad at something
pedalpete wrote 2 days ago:
What is "too ambitious"?
Are there dreamers who overthink and never get anything done?
Absolutely!
Are there also people who do what other people regularly say is
impossible? Also an absolute yes.
Ambition has nothing to do with it. There are doers and there are
talkers.
amirmi78 wrote 9 hours 26 min ago:
It's too ambitious if it's harming the doing.
eleveriven wrote 1 day ago:
Ambition isn't bad on its own, but when it becomes a substitute for
action instead of fuel for it, that's where things go sideways
hackable_sand wrote 1 day ago:
Griffith from Berserk is "too ambitious" but yeah they got things
done I guess.
GianFabien wrote 2 days ago:
The word "ambition" comes with a variety of connotations.
>There are doers and there are talkers.
There are those who use their ambition to define a goal and then work
tirelessly to achieve it. Think of the mountaineer who plans and
trains for decades to eventually ascend Mt Everest.
Then there are those who share their ambition by talking about it.
Seeking recognition, etc for "being ambitious". Staying with the
mountaineer theme, those who refuse to climb a lesser mountain as not
being important enough to expend their precious talents upon. It is
these folks who, if they somehow make enough money in some form, end
up chartering a helicopter and Sherpas to climb Mt Everest.
lo_zamoyski wrote 1 day ago:
The word “ambition” is indeed vague, and this is unfortunate,
as there is a rich vocabulary full of distinction we ought to be
using. (You see the same thing when people use “passionate” as
a virtue, such as in job postings when what they mean is
“enthusiastic”. Taken literally, you certainly don’t want
passionate employees!)
In the strict sense, ambition [0] is an inordinate love of honor.
Perseverance [1], OTOH, is the ability to endure suffering in
pursuit of a good. Both effeminacy (refusal or inability to endure
suffering to attain a good) and pertinacity (obstinate pursuit of
something one should not) are opposed to perseverance.
It seems that ambition is therefore opposed to perseverance, since
it can either be effeminate (the ineffectual daydreamer that makes
big plans that he never realizes) or pertinacious (the person who
bites off more than he can chew).
Prudence [3] involves the application of right reason to action,
which itself presupposes right desire. An inordinate love of honor
is therefore opposed to prudence, because it involves an inordinate
desire. Furthermore, prudence presupposes humility [2], which
involves knowing the actual limits of your strengths and qualities
(it is not the denial of the strengths and qualities you actually
have, which is opposed to humility and a common misconception!).
Humility allows us to moderate our desires. In that sense, ambition
as an inordinate desire for honors beyond one’s reach lacks
humility.
[0]: https://www.newadvent.org/cathen/01381d.htm
[1]: https://www.newadvent.org/summa/3138.htm#article2
[2]: https://www.newadvent.org/cathen/07543b.htm
[3]: https://www.newadvent.org/cathen/12517b.htm
GianFabien wrote 23 hours 11 min ago:
>In the strict sense, ambition [0] is an inordinate love of
honor.
I wasn't familiar with that connotation of "ambition", yet it
immediately rings the bell when thinking of many folks who loudly
and frequently talk about their "ambition"; all talk, no walk.
labrador wrote 2 days ago:
Being lazy is a clever form of productivity
“I choose a lazy person to do a hard job. Because a lazy person will
find an easy way to do it.”
― Bill Gates