bennettnate5 20 hours ago [-]
Incidentally, this describes what I believe to be the great difficulty of PhD research. You have to take a topic you find interesting and read all possible related work in it, which tends to result in significant scope creep as you realize just how much there already is that does what you want to do. Having exhausted your initial energy and excitement for the project, you have to force yourself the remaining 20-30% of the way to the finish line to get that work to a publishable state.
sidewndr46 20 hours ago [-]
Day 1: We aim to demonstrate the effectiveness of an existing industrial catalyst in a novel application that has not seen commercial usage, potentially lowering cost of production of precursors for essential medications
Day 400: Having thoroughly described a universal theory of everything, we set out to build an experimental apparatus in orbit at a Lagrange point capable of detecting a universal particle which acts as a mediator for all observable forces in the known universe.
richardw 17 hours ago [-]
That’s how I do side projects.
amarant 14 hours ago [-]
I think this is the definition of side projects.
Like, if you stay focused, is it even really a side project?
Which is why my 2d top down sprite-based rpg now has a 3d procedural animation engine, a procedural 3d character generator with automagic rigging, a population simulator that would put Europa Universalis to shame if I ever get around to finishing it (ha!), a pixel art editor, a 2d procedural animation engine using active ragdolls.........
You might wonder why a 2d game needs 3d procedural animation, well...
The scope creeps in mysterious ways
bluefirebrand 20 hours ago [-]
Damn, that's an incredible amount of progress in just 400 days
dnnddidiej 6 hours ago [-]
Notice: "We set out to build..."
gadflyinyoureye 18 hours ago [-]
That is the power of AI.
FridgeSeal 10 hours ago [-]
Don’t sell yourself short!
You could achieve things yourself if you tried!
Johnny_Bonk 20 hours ago [-]
Hahaha so well said, can relate during my thesis
dnnddidiej 6 hours ago [-]
This comment is screaming out for 3 or 4 panels and some stick figures.
ryujexu 11 hours ago [-]
[dead]
wasabi991011 20 hours ago [-]
Oh man I feel that in my bones.
Any advice on how to mitigate this?
Kichererbsen 20 hours ago [-]
I worked at a chair for 12 years - in that time I've seen a lot of PhD students go through this.
If it helps anything at all: It's normal. At this point, you've already proven you're smart and knowledgeable. Now, the universe wants to see if you can also finish what you've started. That's the main thing a PhD proves: That you can take an incredibly interesting topic and then do all the boring stuff that they need you to do to be formally compliant with arbitrary rules.
Focus on finishing. Reduce the scope as much as possible again. Down to your core message (or 3-4 core messages, I guess, for paper-based dissertations).
Listen to the feedback you get from your advisor.
You got this!
wjnc 19 hours ago [-]
This is spot on. My dad was a professor and had dozens of PhDs. The only thing differentiating them (as I remember him telling me) was the resolve to keep the work as /tiny/ as possible. Who is remembered for his/her PhD? Only the smallest cream of the crop. He even made good fun of worthless theses by (then) well-known professors. It’s not about your PhD.
When I did my MSc thesis he told me it was a pretty good PhD. (Before giving me a month's work in corrections.) I didn’t understand back then, but I understand now. It was small, replicable and novel (still is)! Just replicate three times and be done with it. You’ve proven your mastery. Now start something serious.
collabs 15 hours ago [-]
> This is spot on. My dad was a professor and had dozens of PhDs. The only thing differentiating them (as I remember him telling me) was the resolve to keep the work as /tiny/ as possible. Who is remembered for his/her PhD? Only the smallest cream of the crop. He even made good fun of worthless theses by (then) well-known professors. It’s not about your PhD.
My professor once told me he presented at a small conference where everybody in the audience had a PhD in mathematics, and maybe 2 of the 50 or so people could follow along. The point he was trying to make is that at some point the people in the audience were no longer really interested in what was being presented, because it is difficult to just follow along with a really niche topic.
brandall10 15 hours ago [-]
There was a book I read a couple years back called "Mathematica: A Secret World of Intuition and Curiosity", by David Bessis.
He discussed this topic and how generally it's left to those who are more notable in a field to ask the 'dumb' questions everyone else is afraid to ask. And such questions often need to be asked to get the audience on board and open the floodgates with areas of niche research - the speaker themself is often too far into the rabbit hole to discern the difference between opaque and obvious.
So it stands to reason, at smaller conferences this would be a big problem, with fewer thought leaders in attendance whose reputations are intact enough that they wouldn't mind looking foolish.
wanderingmind 5 hours ago [-]
Technical feedback, yes, but always reject any career feedback from your advisor, since the data shows it's unlikely to be a good model for future career success
stathibus 15 hours ago [-]
> Focus on finishing. Reduce the scope as much as possible again.
In my field this would be terrible advice. Instead you need to be doing something that your audience will actually give a shit about.
lazyasciiart 7 hours ago [-]
If you’ve spent a significant amount of time widening the scope as far as possible to include everything interesting about your original question, and there is nothing in that whole widened scope that the audience will give a shit about, your topic is unsaveable and your advisor is a failure.
If there is something interesting enough to qualify, then reduce the scope as much as possible. It should go without saying that you shouldn’t throw out the interesting bit.
arethuza 20 hours ago [-]
It's been a long long time since I was in the academic research world - but isn't 3 published papers pretty much the expectation for a PhD quantity of research?
noelwelsh 19 hours ago [-]
Really depends on the field. Computer science research usually has pretty short cycle times. If you're working on, say, biology or anthropology, collecting data can take substantially longer.
godelski 18 hours ago [-]
Switch back and forth between trying and reviewing. Often it can be good to just try before reviewing, to get your feet wet. Don't spend too much time. Then when reviewing you're going to understand it more. Repeat this process.
But there are some things to remember that are incredibly important:
- a paper doesn't *prove* something, it suggests it is *probably* right
- under the conditions of the paper's settings, which aren't yours
- just because someone had X outcome before doesn't mean you won't get Y outcome
- those small details usually dominate success
- sometimes a one liner seemingly throw away sentence is what you're missing
- sometimes the authors don't know and the answer is 5 papers back that they've been building on
- DO NOT TREAT PAPERS AS *ABSOLUTE* TRUTH
- no one is *absolutely* right, everyone is *some* degree of wrong
- other researchers are just like you, writing papers just like you
- they also look back at their old papers and say "I'm glad I'm not that bad anymore"
- a paper demonstrating your idea is a positive signal, you're thinking in the right direction
As soon as you start treating papers as "this is fact" you tend to overly generalize the results. But the details dominate so you just kill your own creativity. You kill your own ideas before you know they're right or wrong. More importantly you don't know how right or how wrong.
gopher_space 12 hours ago [-]
Your bullet points explain most of the replication crisis, from my perspective.
godelski 10 hours ago [-]
They're definitely deeply related. For example, a lot of works get rejected over "novelty" issues. Well, if success and/or failure depend on something seemingly small then it will almost never get through review because it seems like low novelty. Though it'll get through review if authors are convincing enough, which often leads to some minor exaggerations.
Combine that with the publish-or-perish paradigm and I think we got significant coverage. People don't even consider diving deeper into things and are encouraged to take the route of "assume paper is correct" because that's the fastest way to push out research. But if the foundation is shaky, then everything built on it is shaky too.
That's a distinction of the harder, more formal fields like math and physics. They have no issues pushing out papers that may have errors in them, because the process is to attack works as hard as possible. Then whatever is left is where you build again. You definitely have people take advantage of this, like Avi Loeb publishing about aliens, but it is realistically a small price to pay. And hey, even Loeb's work still contributes. If at some point it actually is aliens, then there's work existing that can be built upon. And when it continues to not be aliens, there's existing work to build on, since really his problem is more that the papers just end up concluding "and this is why we can't rule out aliens!" (-__-)
Anyways, long story short, my advice is to just remember that you, and everybody else, are blubbering idiots, and it is an absolute fucking miracle a bunch of mostly hairless apes can even communicate, let alone postulate about the cosmos. At the end of the day we're all on the same team, seeking truth. Truth matters more than our egos, and if we start to forget how dumb we are then we'll only hinder our pursuit of truth.
exidex 20 hours ago [-]
My choice is to not do a PhD and just invest as much or as little effort in the topic as you like
bennettnate5 18 hours ago [-]
For me, it wasn't so much about mitigating this cycle as much as recognizing that the grit of pushing through that last 20-30% is actually a valuable life skill that the PhD could teach me to do, and that projects that I felt like I would never want to touch again actually started to become interesting again after I had left them for a year or so.
ericmcer 18 hours ago [-]
It seems almost inevitable...
Acknowledge it is normal? Attempt to buy deeper into the delusion ("Yeah, my work is awesome and unique!")? Use stimulants to force enthusiastic days every once in a while?
thechao 14 hours ago [-]
Find a brand new hire who wants to get tenure. Getting a PhD through in 4 years is catnip for tenure at most universities (stateside). We then dropped off my dissertation in the middle of NSF funding week. I paid for it during orals (4 hours), but they all signed within a few days without comment.
Uhh... unless you plan to stay in academia? Then, this is a terrible idea.
SecretDreams 14 hours ago [-]
This, all while battling the increasingly heavy burden of regret towards having started a PhD in the first place.
AndrewKemendo 17 hours ago [-]
The majority of PhD candidates deal with this because the point of a PhD is to prove you can do “normal science” [1], which boils down to “how do I make this system go from 1% observable to 1.001% observable”, which is just a gate for being in the academic career field.
You’ll almost never see a PhD thesis that has anything particularly interesting, novel or directly applicable to the sciences.
> You have to take a topic you find interesting and read all possible related work in it
This is definitely the wrong way of going about a research project, and I have rarely seen anyone approach research projects this way. You should read two or at most three papers and build upon them. You only do a deep review of the research literature later in the project, once you have some results and you have started writing them down.
bennettnate5 18 hours ago [-]
The usual justification is that if you don't do at least a breadth-first literature review, you can get burned by missing a paper that already does substantially what you do in your work. I've heard of an extreme case where it happened a week before someone went to defend their dissertation!
1718627440 18 hours ago [-]
Excuse my naivety, but isn't it good if the same results get proved in slightly different ways? This is effectively a replication, but instead of just repeating the experiments, you also replicate the thought process by taking a slightly different approach.
jacinda 18 hours ago [-]
It would be good (especially with the replication crisis), but historically, to earn a PhD, especially at a top-tier institution, the criterion has been conducting original research that produces new knowledge or unique insights.
Replicating existing results doesn't meet that criterion, so unknowingly repeating someone's work is an existential crisis for PhD students. It can mean that you worked for 4-6 years on something the committee then can't/won't grant a doctorate for, effectively forcing you to start over.
Theoretically, your advisor is supposed to help prevent this as well by guiding you in good directions, but not all advisors are created equal.
Apocryphon 17 hours ago [-]
And here we once again see an example of misaligned incentives baked into another one of our most hallowed institutions.
antonvs 16 hours ago [-]
The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.
It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
majormajor 8 hours ago [-]
> The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.
> It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
Or are they trying to require enough rigor and discipline so that out of 100,000 people who want to be the next Einstein, the process washes out the 99,000 who aren't willing or able to do more than throw out half-baked 'creative' ideas and expect the world to pick them up and run with them.
There's only finite attention and money for funding research, so you gotta do SOMETHING to filter out the larpers who want to take it and faff around.
I think at this point the system has eaten its own tail a bit, but there's good reason to require some level of "show me" before getting given the money to run your own research.
Xirdus 18 hours ago [-]
For humanity? Yes, it's generally good. For that particular researcher's career? Not really. Who wants to pay for research into something that's already known?
1718627440 18 hours ago [-]
My imagination was leaning more into the educational side than the research side of university. I see how that wouldn't be appreciated by a patron, but when you get research grants, isn't the topic discussed before starting and paying for the research? Also, that is kind of the point of clearing topics with the chair-holding professor, who is expected to be experienced enough in the subject to know where the knowledge needs to be expanded.
jcelerier 16 hours ago [-]
Well, if you don't care about not being able to do your defense after 4 years of work because someone managed to do it just before you..
ChromaticPanic 16 hours ago [-]
Unless you're already an expert in the topic a literature search is literally step 1 since you have to check if your idea has already been done before.
abdullahkhalids 13 hours ago [-]
That's where your supervisor comes in. In most cases, they should be an expert in the field, and guide you towards a useful and novel problem.
Moreover, I am not suggesting you don't look at other papers at all. But google scholar and some quick skimming of abstracts and papers you find should suffice to check if someone has already done the work. If you start fully reading more than a handful of papers, your ideas are already locked in by what others have done, and it becomes way harder to produce something novel.
tra3 19 hours ago [-]
In one of his speeches, Obama said "Better is good". I think about this a lot. It feels like better compounds over time, too. Small improvements add up. From experience, nothing new is perfect the first go round, so sitting around trying to come up with a perfect design is counterproductive because there's no such thing.
"The impediment to action advances action. What stands in the way becomes the way."
zoogeny 14 hours ago [-]
A saying I've come across is: "Don't let perfect be the enemy of good"
I had a coworker who would always be diplomatic about code changes he felt could be improved but when he felt he was nitpicking, where he would say: It's better than it was. It allowed him to provide criticism while also giving permission to go ahead even if there were minor things that weren't perfect. I strongly endorse this kind of attitude.
flutas 13 hours ago [-]
Hmm, in every team I've been in (only 3 tbf) we almost all followed the "nit" approach for PRs.
nit: this could be changed to XYZ
vs
we should use XYZ here
where it was understood nits could be ignored if you didn't feel it was an urgent thing vs a preference.
zoogeny 13 hours ago [-]
It's worth noting that this is a kind of different "nit" than something that might be attached to a line of code. Like, someone might "nit" using a bunch of if statements where a switch statement might work, or if someone uses a `for each` where a `thing.map` would do.
What I am describing would be something higher level, more like a comment on approach, or an observation that there is some high-level redundancy or opportunity for refactor. Something like "in an ideal world we would offload some of this to an external cache server instead of an in-memory store but this is better than hitting the DB on every request".
That kind of observation may come up in top-level comment on a code review, but it might also come up in a tech review long before a line of code has been written. It is about extending that attitude to all aspects of dev.
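The line-level kind of nit described above might look like this, as a minimal sketch (hypothetical code, not from the thread):

```javascript
// Accumulating with forEach + push: works fine, but invites a line-level nit.
const doubledLoop = [];
[1, 2, 3].forEach((n) => {
  doubledLoop.push(n * 2);
});

// The map version states the same transformation directly, with no mutable temp.
const doubledMap = [1, 2, 3].map((n) => n * 2);

console.log(doubledLoop); // [ 2, 4, 6 ]
console.log(doubledMap);  // [ 2, 4, 6 ]
```

Both are correct, which is exactly why this is a "nit": the reviewer is flagging a preference, not a bug.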
lazyasciiart 7 hours ago [-]
I had someone reject my code that improved/regularized half a dozen instances of a domain object we had, where they were showing up in code paths I cared about. He said there’s dozens of these, don’t submit this unless you fix them all.
YZF 6 hours ago [-]
I had something similar, but I convinced the other person the rest of the work could be done later. Then they went ahead and did it anyway, despite the other instances having no use/value. Go figure. I guess consistency has some value, to argue the other side. I tend to be extremely flexible about allowing different ways of doing things, but some people seem to confuse form with function, insisting on some "perfection" in the details. I think this is partly why we get such mixed reactions to AI, where LLM output isn't quite "right" (despite often producing code that functions as well as human-written code).
strogonoff 1 hours ago [-]
Consistency reduces the mental cost of acquiring and maintaining an understanding of a system. In a real sense, moving from one approach to two different approaches, even if one of them is slightly better than the original one, can be a downgrade.
YZF 6 hours ago [-]
But then you end up with nit inflation: people feel like they need to fix the nits, and do, and "nit" no longer means anything. I try to just not comment unless I feel there is some learning in the nit.
jiggawatts 14 hours ago [-]
I have a crippling guilt about not keeping my apartment as spotlessly clean as my parents did theirs, to the point that I end up procrastinating, which just makes it worse.
The trick to overcoming this is not to aim for "clean" but for "cleaner than before".
Just keep chipping away at it, whether it is a messy codebase or a messy kitchen.
zoogeny 13 hours ago [-]
I use it for cleaning all the time. Whenever I have dishes, I always give myself permission to do as little as I want knowing that one clean dish is better than nothing. Most often I end up doing them all.
The other saying I say is "completion not perfection". That helps me in yard work especially. I'm not going for the cover shot of "Better Homes and Gardens", I just need the lawn to be cut.
Waterluvian 14 hours ago [-]
I call it “sweeping back the desert.”
The sand blows in endlessly. You don’t aim for a pristine, sandless land. But you can’t ignore it or it takes over.
I’ll just pick up a few things and ferry them towards their “home.” Or go do a small amount of yard work. Etc.
YZF 6 hours ago [-]
weeding the garden is another analogy.
astrobe_ 2 hours ago [-]
The thing is, "better" is an ambiguous word. I can change a program in some way and make it smaller. I can change it in some other way and make it faster. Both are "better", but in different ways. More often than not, however, you can't have both smaller and faster, or else you are just fixing a performance bug. Often improving one property even makes some other property less good, as you can see in the numerous "pick two" rules.
So "better" means "more specialized" more often than it means "more optimized". I don't say it is a bad thing per se, but it is best to keep in mind that there are two types of improvement, fixes and specializations, because the latter is a commitment.
nonethewiser 17 hours ago [-]
It's perfectionism.
I always thought perfectionism meant extremely high achievements (for too great of a cost). But it can also be quitting without any progress because you can't accept anything less than perfect (which may or may not be achievable). Perfectionism can be someone procrastinating on a large task.
tt_dev 19 hours ago [-]
Obama - what a time to be alive
tyleo 19 hours ago [-]
Our CEO at Rec Room put this a way I really like, "Teams are always telling me they wish they did shorter projects. I've almost never heard a team say, 'we wish we delayed launch, did something more complex, polished more'"
I don't think it holds in 100% of situations but I do think if you're going to make an error one way or the other, I'd rather do something smaller and release too early than do something bigger and waste time.
YZF 6 hours ago [-]
There's features and there is quality and there is domain.
I worked on a team that built high precision industrial machinery. The team and the project manager decided to delay shipping because there were still problems. We delayed, fixed the problems, and the machine worked really well and was used for at least a decade. If we'd shipped it too soon we would have had to try to fix it at a remote site, and it would likely have suffered from problems.
With most products you want to figure out what is your MVP (minimal viable product) and what is the quality level your customers expect. If you ship something less than that it's probably not a good tradeoff. If you build too much and ship too late that's also not a good tradeoff. When shipping increments they also need to be appropriately sized and with the right quality level.
mcontrac 20 hours ago [-]
I think the author is really just getting at the fact that humans are by nature intelligent and by nature tend to think of similar ideas. So you can either unknowingly complete a project which is inevitably in some sense a replication of another project, or you can do the research first and realize it's partially a replication, which is a bit disheartening. I think the solution might lie in realizing that completing a project for the sake of your own learning might be the most important factor. (This is easier said than done when you are trying to complete novel academic research or when you are trying to make a profit off of your unique project. But those, too, are more than forgiving to research that seems only to slightly tweak something that already exists.)
eagerpace 16 hours ago [-]
We all just need a little more sodium in our diets.
CamelCaseCondo 4 hours ago [-]
We all just need a little more iodine in our sodium.
dgb23 18 hours ago [-]
I'm _exactly_ in this situation right now with a side project.
It's in a field that I have little experience with (Information Retrieval). So there is obviously prior art that I could learn from or even integrate with.
This article motivates me further to learn things by focusing on building my own and peek into prior art as I go, when I'm stuck or need ideas.
Recently a Clojure documentary came out and the approach of Rich Hickey was seemingly the opposite: Deep research of prior art, papers, other languages over a long period of time.
However, he also mentioned that he made other languages before. So the larger story starts earlier, by making things and learning from practice.
Maybe that's also the bigger lesson: Don't overthink, start by making the thing. But later when you learned a bunch of practical lessons and maybe hit a wall or two, then you might need that deeper research to push further.
drivers99 17 hours ago [-]
> Recently a Clojure documentary came out and the approach of Rich Hickey was seemingly the opposite: Deep research of prior art, papers, other languages over a long period of time.
That was also on my mind thanks to the documentary. Then I followed up with "Easy made Simple" and "Hammock Driven Development", and it makes me want to learn Clojure.
It's a really good language that is worth learning. If you like you can join the slack that is linked on clojure.org. Beginners are very welcome in my experience and there are a ton of great people around there.
imrozim 1 hours ago [-]
Built 5 startups, and overthinking killed most of them. I'd spend weeks on researching instead of just shipping. The story hit me hard; sometimes you just want to build the ugly version 1 first.
radley 11 hours ago [-]
Same game, different approaches:
Sometimes you just want to button-mash through, rushing about carefree.
Other times, you want to go entirely stealth, wandering around, trying to find the best path, wasting an hour or more on a level you could have button-mashed in 5 minutes.
Both are fine.
omoikane 18 hours ago [-]
I found that setting deadlines solves most scope creep problems. Anecdotally, I am more likely to complete a project for a game jam or programming contest (which come with hard deadlines) than finishing an open-ended project.
See also "Why does the C++ standard ship every three years" (as opposed to ship when the desired features are ready):
> Perhaps there’s some kind of conservation law here: Any increases in programming speed will be offset by a corresponding increase in unnecessary features, rabbit holes, and diversions.
Great explanation for what I see when I mess around with coding LLMs. The natural human instinct of “this feels complicated, let me think about it some more” is suppressed. So far all the gains from the stunning initial speed have been cancelled out later in the project, arising from the over-engineered complexity baked into the code.
wisemanwillhear 20 hours ago [-]
Over-planning and scope creep are a problem, but let's not swing the pendulum too far the other way. Some of my most successful projects were projects where I planned out and worked through most of the features ahead of time through the process of modeling my data, without any working software to try out. When I'm in that phase, I often don't really know what is too much. If I leave out features I think I or the users will probably want, I spend a lot of time on significant redesign of core aspects of the code. If I'm wrong, the project gets too big and we chalk it up to scope creep.
My ability to get this right is often a matter of how well I know the domain. If I don't know the domain as well I think I do, I fall into a lot of rework. If I know the domain more than I imagine then I waste my time with a baby step process when I could have run. All of this is a big judgement call, and I have "regrets" in both directions.
asdfman123 19 hours ago [-]
I think the ideal solution is to spend a lot of time in the analysis phase to load your brain up with the correct context, but then be ready to throw out the overengineered solution and just build what feels right.
Don't fall prey to sunk cost fallacy. Just because you spent hours researching a PhD level topic doesn't mean you now have to use it in your project, if it's not quite the right application.
goalieca 19 hours ago [-]
You worry too much about being wrong. Just try something and adjust as needed.
1-6 21 hours ago [-]
Interesting read but the author's thoughts were all over the place.
LPisGood 20 hours ago [-]
There is something to be said about scope creep here
hendersonreed 20 hours ago [-]
This isn't a blogpost with a particular focus, it's a newsletter update for people who follow this person.
JSR_FDED 10 hours ago [-]
It covered multiple topics, that’s not the same as all over the place.
balamatom 21 hours ago [-]
[flagged]
gblargg 7 hours ago [-]
I try to do the first version in a minimal way to just try the concept. If it works it will be useful and justify an improved version, and will be a good test ground for elements that will go into the improved version.
AkiraHsieh 6 hours ago [-]
Experienced this building a decision support framework - started simple, ended up with dual architecture and 18 languages. Sometimes scope creep reveals the real problem you're solving.
SuperSixFour 10 hours ago [-]
The fact that the author was confident enough to start the article with a picture of the bins they made, which include seemingly 3x the same thing (oats) and an entire container of ice cream cones while the only actual ingredient is flour (and it's at the end), makes me question the validity of their argument
philipnee 19 hours ago [-]
Firstly - Greetings! It’s so rare to see a Clojure person in the wild! And secondly, I really resonated with this! It feels like we computer programmers typically overthink too much to begin with, and then LLMs come along and actually help us overthink even more!
mockbolt 20 hours ago [-]
This is a pretty common failure mode in engineering too.
You start with a simple goal → then research → then keep expanding scope → and never ship.
The people who actually finish things do the opposite:
lock scope early, ignore “better ideas”, ship v1.
Most projects don’t fail due to lack of ideas, they fail because they never converge.
sebastianperezr 16 hours ago [-]
I am a solo builder and one thing that helped me a lot was this: most of what looks like a "necessary abstraction" is actually scope creep with a different name. I was adding a flag for every new feature and I noticed the pattern in my own code, so I made one rule: a feature cannot land until its flag-off behavior has a test. That changed how I see flags. The flag is part of the product, not an escape hatch. Three features in my backlog died by themselves when I started thinking like this.
kxcrossing 14 hours ago [-]
For someone who says they are overcome by scope creep, they sure do seem to get a lot done — so many linked articles on all sorts of topics at the end of the post.
I think the author is indeed someone who just really enjoys learning and doing all sorts of things, so the rabbitholing is part of the fun that tickles their brain.
w10-1 19 hours ago [-]
Sabotage is intentional, but the problem is unintended excursions, which are endemic to any scouting.
The real problem is avoidance, when cuts are warranted and you don't want them, so you ... hide, often by working hard on something else.
The solution is to value your time. Most don't, so (self-) managers instead need to dangle other opportunities: finish this so you can do that. You can't take candy from a baby without trouble; instead, you trade for something else.
giladd 20 hours ago [-]
> Perhaps there’s some kind of conservation law here: Any increases in programming speed will be offset by a corresponding increase in unnecessary features, rabbit holes, and diversions.
This resonates hard. LLMs enable true perfectionism, the ability to completely fulfil your vision for a project. This lets you add many features without burning out due to fatigue or boredom. However (as the author points out), most projects' original goal does not require these complementary features.
rafram 20 hours ago [-]
I think this should've been two separate blog posts.
chatmasta 20 hours ago [-]
Yeah, it’s funny how all the comments so far are only talking about the over-engineering and scope creep, when the bulk of the blog was dedicated to a totally separate rant (but a good one!) on structural diffing.
Strom 14 hours ago [-]
Kind of hilarious though that it talks about scope creep and then transitions into a whole different long topic.
sambaumann 20 hours ago [-]
Looks like this was a newsletter by the author, not a blogpost
The newest structural diff tool is RefactoringMiner; there's a paper and a GitHub repo that works out of the box, which is rare for this space. Excellent results, but mainline is limited to Java IIRC, with a couple of ports for other languages.
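For readers unfamiliar with the idea behind structural diffing, here is a minimal sketch (not RefactoringMiner itself, which operates on Java): compare two sources by AST shape instead of by lines, so pure reformatting produces no diff while a semantic change does:

```python
import ast

def structural_equal(src_a: str, src_b: str) -> bool:
    """Compare two Python sources by AST shape, ignoring formatting and comments."""
    return ast.dump(ast.parse(src_a)) == ast.dump(ast.parse(src_b))

before  = "x=1+2"
after   = "x = (1 + 2)  # reformatted only"
changed = "x = 1 + 3"

print(structural_equal(before, after))    # True: whitespace, parens, comments don't change the AST
print(structural_equal(before, changed))  # False: the constant actually changed
```

A line-based diff would flag both edits as changes; the structural comparison only flags the second. Real structural diff tools go further and match subtrees to report moves and renames, but this is the core trick.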
quarkz14 20 hours ago [-]
I've definitely found myself in a similar situation; in fact, most of the time option 2 happens. I too have caught myself just thinking rather than building, and I'm glad I'm not the only one who repeatedly tells himself to just build it rather than enter the rabbit hole of what's already out there.
ljm 20 hours ago [-]
I feel this a lot, but it's because I don't actually want to write code or build something if there is something workable already out there.
Maybe I lack imagination or curiosity, but it makes it difficult to come up with an idea and follow it through.
hirako2000 17 hours ago [-]
My answer is both #1 and #2
Prototype a minority of the time. Research a majority of the time. At some point the ratio flips as research fades out and producing increases.
Organizations and Conferences:
1. Insist on doing everything through “channels.” Never permit short-cuts to be taken in order to expedite decisions.
2. Make “speeches.” Talk as frequently as possible and at great length. Illustrate your “points” by long anecdotes and accounts of personal experiences.
3. When possible refer all matters to committees, for “further study and consideration”. Attempt to make the committees as large as possible – never less than five.
4. Bring up irrelevant issues as frequently as possible.
5. Haggle over precise wordings of communications, minutes, resolutions.
6. Refer back to matters decided upon at the last meeting and attempt to re-open the question of the advisability of that decision.
7. Advocate “caution.” Be “reasonable” and urge your fellow-conferees to be “reasonable” and avoid haste which might result in embarrassments or difficulties later on.
8. Be worried about the propriety of any decision – raise the question of whether such action as is contemplated lies within the jurisdiction of the group or whether it might conflict with the policy of some higher echelon.
Managers and Supervisors:
1. Demand written orders.
2. “Misunderstand” orders. Ask endless questions or engage in long correspondence about such orders. Quibble over them when you can.
3. Do everything possible to delay the delivery of orders. Even though parts of the order may be ready beforehand, don’t deliver it until it’s completely ready.
4. Don’t order new working materials until your current stocks have been virtually exhausted, so that the slightest delay in filling your order will mean a shutdown.
5. Order high-quality materials which are hard to get. If you don’t get them argue about it. Warn that inferior materials will mean inferior work.
6. In making work assignments, always sign out the unimportant jobs first. See that important jobs are assigned to inefficient workers with poor equipment.
7. Insist on perfect work in relatively unimportant products; send back for refinishing those which have the least flaws. Approve other defective parts whose flaws are not visible to the naked eye.
8. Make mistakes in routing so that parts and materials will be sent to the wrong place in the plant.
9. When training new workers, give incomplete or misleading instructions.
10. To lower morale and with it production, be pleasant to inefficient workers; give them undeserved promotions. Discriminate against efficient workers; complain unjustly about their work.
11. Hold meetings when there is critical work to be done.
12. Multiply paperwork in plausible ways. Start duplicating files.
13. Multiply the procedures and clearances involved in issuing instructions, making payments, and so on. See that three people have to approve everything where one would do.
14. Apply all regulations to the last letter.
robertcope 11 hours ago [-]
Hah, came here to make sure someone had mentioned this! One of my favorites.
dgb23 18 hours ago [-]
This reads like satire!
33MHz-i486 19 hours ago [-]
Also, if you're in a large organization, this is a great way to sabotage other people's projects while elevating your stature. Require that they go evaluate alternatives and prior art, and write a slew of analysis and decision documentation.
utopiah 20 hours ago [-]
I mean if you don't reconsider the foundation of computer science, mathematics or what even is information, can you truly be building a cool CRM?
sfink 15 hours ago [-]
It's like those ridiculous people who try to make a PBJ without knowing anything about glycemic indexes, peanut smut, or the historical origins of breadmaking.
Kids these days just want to use prefab libraries and frameworks with a million dependencies doing god knows what and written by randos.
(Unrelated to how commenters these days just want an excuse to use the term "peanut smut".)
mystraline 11 hours ago [-]
This is also a great way to sabotage a company from the inside.
This technique is called out in the CIA simple field sabotage manual.
danaw 19 hours ago [-]
i feel a lot of people are missing the point here: identifying the "why" in why you want to build a project.
do you want to learn a new skill? do you want to scratch a very specific personal itch for just yourself? do you want to solve problems for others as well? do you want to build a startup/business around the idea?
all of these necessitate different approaches and strategies to research and coding. scratching an itch? maybe fully vibe coding is fine. want to learn? ditch the vibes and write by hand and ignore prior art. want to build a business? do some actual market research first and decide if this is something you actually want to pursue.
this post was a good reminder for me to identify the why as early on as possible and to be ok with just building something for myself without always having to monetize a side project which, for me, just zaps all joy from it.
jwpapi 10 hours ago [-]
Sounds like AI
ascii0eks84 18 hours ago [-]
Scope creep is scary when you have the wrong pretext: to "just" implement a small feature or a project, when in reality the prerequisites to do so are enormous.
brador 9 hours ago [-]
Add flip-flopping: staying undecided on language/engine/toolset for weeks to decades.
voidhorse 17 hours ago [-]
As usual it's not so black and white and is all about balance.
Project where the sole user is you in your kitchen? Sure, hack it together.
Project where you actually want other people to use the product? A research phase matters and helps here.
Consider what the goal is and the amount of effort to invest typically becomes more evident.
sylware 18 hours ago [-]
Just code using C++ (or a language with similar syntax complexity or a massive runtime: Java, Microsoft Rust, etc.). It gets even better with regular ISO feature creep: you'll always find a dev who manages to make things depend hard on the latest "standard". Basically, you end up dependent on the massive complexity of a compiler due to the syntax complexity, and, the cherry on top, thanks to ISO you get feature creep creating a cycle of planned obsolescence every 5 to 10 years.
Oh, sorry, "they" call that "innovation".
ascii0eks84 18 hours ago [-]
You could achieve things yourself if you tried!
Any advice on how to mitigate this?
If it helps anything at all: it's normal. At this point, you've already proven you're smart and knowledgeable. Now the universe wants to see if you can also finish what you've started. That's the main thing a PhD proves: that you can take an incredibly interesting topic and then do all the boring stuff needed to be formally compliant with arbitrary rules.
Focus on finishing. Reduce the scope as much as possible again. Down to your core message (or 3-4 core messages, I guess, for paper-based dissertations).
Listen to the feedback you get from your advisor.
You got this!
When I did my MSc thesis, he told me it was a pretty good PhD. (Before giving me a month's work in corrections.) I didn't understand back then, but I understand now. It was small, replicable, and novel (still is)! Just replicate three times and be done with it. You've proven your mastery. Now start something serious.
My professor once told me about presenting at a small conference where everybody in the audience had a PhD in mathematics, and maybe 2 of the 50 or so people could follow along. The point he was trying to make is that at some point the people in the audience were not really interested in what was being presented, because it is difficult to just follow along with some really niche topic.
He discussed this topic and how generally it's left to those who are more notable in a field to ask the 'dumb' questions everyone else is afraid to ask. And such questions often need to be asked to get the audience on board and open the floodgates with areas of niche research - the speaker themself is often too far into the rabbit hole to discern the difference between opaque and obvious.
So it stands to reason, at smaller conferences this would be a big problem, with fewer thought leaders in attendance whose reputations are intact enough that they wouldn't mind looking foolish.
In my field this would be terrible advice. Instead, you need to be doing something that your audience will actually give a shit about.
If there is something interesting enough to qualify, then reduce the scope as much as possible. It should go without saying that you shouldn’t throw out the interesting bit.
But there's some things to remember that are incredibly important
As soon as you start treating papers as "this is fact," you tend to over-generalize the results. But the details dominate, so you just kill your own creativity. You kill your own ideas before you know whether they're right or wrong; more importantly, you don't know how right or how wrong. Combine that with the publish-or-perish paradigm and I think we get significant coverage. People don't even consider diving deeper into things and are encouraged to take the route of "assume the paper is correct," because that's the fastest way to push out research. But if the foundation is shaky, then everything built on it is shaky too.
That's a distinction in the harder, more formal fields like math and physics. They have no issue pushing out papers that may contain errors, because the process is to attack works as hard as possible; whatever is left is where you build again. You definitely have people take advantage of this, like Avi Loeb publishing about aliens, but it is realistically a small price to pay. And hey, even Loeb's work still contributes. If at some point it actually is aliens, then there's existing work that can be built upon. And when it continues to not be aliens, there's existing work to build on anyway; really, his problem is more that the papers just end up concluding "and this is why we can't rule out aliens!" (-__-)
Anyways, long story short, my advice is to just remember that you, and everybody else, are blubbering idiots, and it is an absolute fucking miracle that a bunch of mostly hairless apes can even communicate, let alone postulate about the cosmos. At the end of the day we're all on the same team, seeking truth. Truth matters more than our egos, and if we start to forget how dumb we are, we'll only hinder our pursuit of truth.
Acknowledge it is normal? Attempt to buy deeper into the delusion ("Yeah, my work is awesome and unique!")? Use stimulants to force enthusiastic days once in a while?
Uhh... unless you plan to stay in academia? Then, this is a terrible idea.
You’ll almost never see a PhD thesis that has anything particularly interesting, novel or directly applicable to the sciences.
[1] https://en.wikipedia.org/wiki/Normal_science
This is definitely the wrong way of going about a research project, and I have rarely seen anyone approach research projects this way. You should read two or at most three papers and build upon them. You only do a deep review of the research literature later in the project, once you have some results and you have started writing them down.
Replicating existing results doesn't meet those criteria, so unknowingly repeating someone's work is an existential crisis for PhD students. It can mean that you worked for 4-6 years on something the committee then can't/won't grant a doctorate for, effectively forcing you to start over.
Theoretically, your advisor is supposed to help prevent this as well by guiding you in good directions, but not all advisors are created equal.
It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
> It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
Or are they trying to require enough rigor and discipline so that out of 100,000 people who want to be the next Einstein, the process washes out the 99,000 who aren't willing or able to do more than throw out half-baked 'creative' ideas and expect the world to pick them up and run with them.
There's only finite attention and money for funding research, so you gotta do SOMETHING to filter out the larpers who want to take it and faff around.
I think at this point the system has eaten its own tail a bit, but there's good reason to require some level of "show me" before getting given the money to run your own research.
Moreover, I am not suggesting you don't look at other papers at all. But google scholar and some quick skimming of abstracts and papers you find should suffice to check if someone has already done the work. If you start fully reading more than a handful of papers, your ideas are already locked in by what others have done, and it becomes way harder to produce something novel.
"impediment to action advances action. what stands in the way, becomes the way".
I had a coworker who would always be diplomatic about code changes he felt could be improved but when he felt he was nitpicking, where he would say: It's better than it was. It allowed him to provide criticism while also giving permission to go ahead even if there were minor things that weren't perfect. I strongly endorse this kind of attitude.
What I am describing would be something higher level, more like a comment on approach, or an observation that there is some high-level redundancy or opportunity for refactor. Something like "in an ideal world we would offload some of this to an external cache server instead of an in-memory store but this is better than hitting the DB on every request".
That kind of observation may come up in top-level comment on a code review, but it might also come up in a tech review long before a line of code has been written. It is about extending that attitude to all aspects of dev.
The trick to overcoming this is not to aim for "clean" but for "cleaner than before".
Just keep chipping away at it, whether it is a messy codebase or a messy kitchen.
The other saying I say is "completion not perfection". That helps me in yard work especially. I'm not going for the cover shot of "Better Homes and Gardens", I just need the lawn to be cut.
The sand blows in endlessly. You don’t aim for a pristine, sandless land. But you can’t ignore it or it takes over.
I’ll just pick up a few things and ferry them towards their “home.” Or go do a small amount of yard work. Etc.
So "better" means "more specialized" more often that it means "more optimized". I don't say it is a bad thing per se, but it is best to keep in mind that they are two types of improvement, fixes and specializations, because the latter is a commitment.
I always thought perfectionism meant extremely high achievement (at too great a cost). But it can also mean quitting without any progress because you can't accept anything less than perfect (which may or may not be achievable). Perfectionism can be someone procrastinating on a large task.
I don't think it holds in 100% of situations but I do think if you're going to make an error one way or the other, I'd rather do something smaller and release too early than do something bigger and waste time.
I worked on a team that built high-precision industrial machinery. The team and the project manager decided to delay shipping because there were still problems. We delayed, fixed the problems, and the machine worked really well and was used for at least a decade. If we'd shipped it too soon, we would have had to try to fix it at a remote site, and it likely would have suffered from problems.
With most products you want to figure out what is your MVP (minimal viable product) and what is the quality level your customers expect. If you ship something less than that it's probably not a good tradeoff. If you build too much and ship too late that's also not a good tradeoff. When shipping increments they also need to be appropriately sized and with the right quality level.
It's in a field that I have little experience with (Information Retrieval). So there is obviously prior art that I could learn from or even integrate with.
This article motivates me further to learn things by focusing on building my own and peek into prior art as I go, when I'm stuck or need ideas.
Recently a Clojure documentary came out and the approach of Rich Hickey was seemingly the opposite: Deep research of prior art, papers, other languages over a long period of time.
However, he also mentioned that he made other languages before. So the larger story starts earlier, by making things and learning from practice.
Maybe that's also the bigger lesson: Don't overthink, start by making the thing. But later when you learned a bunch of practical lessons and maybe hit a wall or two, then you might need that deeper research to push further.
That was also on my mind thanks to the documentary. Then I followed up with "Simple Made Easy" and "Hammock Driven Development", and it makes me want to learn Clojure.
Clojure documentary on CultRepo channel: https://www.youtube.com/watch?v=Y24vK_QDLFg
Simple Made Easy: https://www.youtube.com/watch?v=SxdOUGdseq4
Hammock Driven Development: https://www.youtube.com/watch?v=f84n5oFoZBc
Sometimes you just want to button-mash through, rushing about carefree.
Other times, you want to go entirely stealth, wandering around, trying to find the best path, wasting an hour or more on a level you could have button-mashed in 5 minutes.
Both are fine.
See also "Why does the C++ standard ship every three years" (as opposed to ship when the desired features are ready):
https://news.ycombinator.com/item?id=20428703 (2019-07-13, 220 comments)
Great explanation for what I see when I mess around with coding LLMs. The natural human instinct of “this feels complicated, let me think about it some more” is suppressed. So far all the gains from the stunning initial speed have been cancelled out later in the project, arising from the over-engineered complexity baked into the code.
My ability to get this right is often a matter of how well I know the domain. If I don't know the domain as well as I think I do, I fall into a lot of rework. If I know the domain better than I imagine, then I waste my time with a baby-step process when I could have run. All of this is a big judgement call, and I have "regrets" in both directions.
Don't fall prey to sunk cost fallacy. Just because you spent hours researching a PhD level topic doesn't mean you now have to use it in your project, if it's not quite the right application.