Hacker News | Grumbledour's comments

I think they just confused Bavaria with Munich. It was the city that had its own Linux distribution (LiMux), and the move of Microsoft's headquarters was part of their efforts to change that, because it meant more tax income for the city (Munich is not part of the district Landkreis München).


It's "Rattenfänger von Hameln" in german, so the literal translation would be "Rat-Catcher of Hamelin".

I do remember him wearing brightly colored patchwork clothing in the stories, but I could not say if that was an integral part of the original fable or just added in retellings to make the character stand out more as a mysterious stranger.


Here is a picture on Wikipedia. https://de.wikipedia.org/wiki/Rattenf%C3%A4nger_von_Hameln#/...

I grew up around Hameln and can confirm, that is how he is depicted.

Also a depiction of him from 1592: https://de.wikipedia.org/wiki/Rattenf%C3%A4nger_von_Hameln#/...

So it is part of the fable.


Was that kind of garb common back then? Reminds me of Swiss Guard uniforms (granted, developed in the early 20th c., but based on 16th-century imagery).


Not sure; the costume reminds me of a jester. If I had to take a stab at it, here is the original transcription from the Brüder Grimm:

"Im Jahr 1284 ließ sich zu Hameln ein wunderlicher Mann sehen. Er hatte einen Rock von vielfarbigem, bunten Tuch an, weshalben er Bundting soll geheißen haben, und gab sich für einen Rattenfänger aus…"[0][1]

"In the year 1284, a strange man appeared in Hameln. He had a skirt, made from differently colored fabrics, which is why his name was 'bundle(?)', pretending to be a rat catcher…"

[0] https://de.wikisource.org/wiki/Seite:Deutsche_Sagen_(Grimm)_... [1] https://de.wikisource.org/wiki/Seite:Deutsche_Sagen_(Grimm)_...


Someone needs to work on the wiki etymology... (not saying it's wrong, just that there seems to be a story missing there that connects 1 & 2 :)

https://en.wiktionary.org/wiki/bunting#English


It would not surprise me. Clothing took a lot of labor to make, and a large part of that was women's labor, which history doesn't record much of. When you are already putting in that much effort, it isn't much more work to dye in bright colors, and people like colorful clothing (some, like the Amish, make plainness part of their identity, of course, but they like colors too; they reject them because they think that helps them get to heaven). Colors were limited to what dyes they could make, so probably not as bright as modern ones, but not dark in general.


> A large part of the labor was women's labor which history doesn't record much of

Women spent much of their lives making textiles. It likely wasn't recorded much because it was so ubiquitous.

For example, my family photographs when I was growing up were nearly all about documenting unusual events, like birthdays, holidays, and vacations. The humdrum ordinary things were not photographed. There were only two photos with the family car incidentally in the frame. No photographs of the neighborhood. One photo of the school I attended. No pictures of my dad at work. No pictures of my mom cleaning the house. And so on.

It gives a fairly skewed vision of life then.


That too, but we know more about men's work that was just as ubiquitous. Though the vast majority of history is about those in charge - the 0.0001%.


> the 0.0001%

The ones who can read and write expect to be paid, and the wealthy and powerful will commission them to write about what interests the wealthy and powerful - i.e. themselves.

This state of affairs persisted until the advent of the printing press, which made for a mass market of ordinary people.


Bright colors fade more noticeably over time, so bright clothing was a good indication of new, regularly replaced clothing. The dyes themselves could also be phenomenally expensive. The scarlet red of Catholic cardinals was historically made from Kermes, an especially lightfast dye. Kermes was in turn a cheaper alternative to the Tyrian Purple worn previously.

Daily clothing would have been more pastel than the saturated colors we associate with "colorful" today.


The question is of course always where someone draws the line, and that's part of the problem.

Too many people have the "Premature optimization is the root of all evil" quote internalized to such a degree that they won't even consider any criticisms or suggestions.

And while they might be right about the small stuff, it piles up: because you chose several times not to optimize, your technology choices and architecture decisions add up to a bloated mess anyway, one that can't be salvaged.

Like, when you choose a web framework for a desktop app, install size, memory footprint, slower performance, etc. might not matter when looked at individually, but in the end it can easily add up, and your solution might just suck without much benefit to you. Pragmatism seems to be the hardest thing for most developers to learn, and so many solutions get blown out of proportion instantly.


> Too many people have the "Premature optimization is the root of all evil" quote internalized to a degree they won't even think about any criticisms or suggestions.

Yeah, I find it frustrating how many people interpret that quote as "don't bother optimizing your software". Here's the quote in context, from the paper it comes from:

> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

> Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.

Knuth isn't saying "don't bother optimizing", he's saying "don't bother optimizing before you profile your code". These are two very different points.
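
And "profile your code" is cheap to act on. A minimal Python sketch (the hot() function is a made-up stand-in for real work):

    import cProfile
    import pstats

    def hot():  # hypothetical stand-in for your actual workload
        return sum(i * i for i in range(10_000))

    # Profile 100 calls and dump the stats to a file...
    cProfile.run("for _ in range(100): hot()", "prof.out")

    # ...then print the 5 most expensive entries by cumulative time.
    pstats.Stats("prof.out").sort_stats("cumulative").print_stats(5)

Only the entries at the top of that list are candidates for Knuth's "critical 3%".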


I like Knuth and think he’s a great writer, but this particular paper [1] is… hard to read. Almost as if it is an unedited stream of consciousness rather than something he intended to be published.

Reading the section you are quoting from (as well as the section of the conclusion dealing with efficiency), I think it should be clear that in the context of this paper, “optimization” means performance enhancements that render the program incomprehensible and unmaintainable. This is so far removed from what anyone in the last 30+ years thinks of when they read the word “optimization” that we are probably better off pretending that this paper was never written. And smacking anyone that quotes it.

[1] https://dl.acm.org/doi/10.1145/356635.356640


I actually think it's a lovely paper (and he obviously intended to publish it, and put a lot of effort into compiling and editing it) and illustrates the nature of his writing very well: he's managed to be encyclopedic about all the topics he chose to discuss, while still having it be very personal (the matter at stake is one of programmers' style and preferences after all). This blog post (https://blog.plover.com/prog/Hoare-logic.html) calls it “my single all-time favorite computer science paper” and here's a recent HN thread with at least two others agreeing it's a great paper: https://news.ycombinator.com/item?id=44416265

I've posted a better scan here: https://shreevatsa.net/tmp/2025-06/DEK-P67-Structured.progra...


I'm old.

My boss (and mentor) from 25 years ago told me to think of the problems I was solving with a 3-step path:

1. Get a solution working

2. Make the solution correct

3. Make the solution efficient

Most importantly, he emphasized that the work must be done in that order. I've taken that everywhere with me.

I think one of the problems is that quite often, due to business pressure to ship, step 3 is simply skipped. Often, software is shipped halfway through step 2 -- software that is at best partially correct.

This pushes the problem down to the user, who might be building a system around the shipped code. This compounds the problem of software bloat, as all the gaps have to be bridged.


> Make It Work

> Make It Right

> Make It Fast

https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast


I've never interpreted "Premature optimization..." to mean don't think about performance, just that you don't have to implement mechanisms to increase performance until you actually have requirements to do so - you should always ask of a design, "How could I make this perform better if I had to?"


To me, it rather meant: "Ultrahard" optimization is perfectly fine and a good idea, but not before it has become clear that the requirements won't change anymore (because highly optimized code is very often much harder to change to include additional requirements).

Any different interpretation in my opinion leads to slow, overbloated software.


Yeah - I've heard that described as "It's easier to make working things fast than fast things work" - or something like that.


It is forever baffling to me that so many devs don’t seem to appreciate that small performance issues compound, especially when they’re in a hot path, and have dependent calls.

Databases in particular, since that’s my job. “This query runs in 2 msec, it’s fast enough.” OK, but it gets called 10x per flow because the ORM is absurdly stupid; if you cut it down by 500 microseconds, you’d save 5 msec. Or if you’d make the ORM behave, you could save 18 msec, plus the RTT for each query you neglected to account for.
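
To make the compounding concrete, here is a minimal Python sketch (stdlib sqlite3 standing in for the real database; the items table and column names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO items VALUES (?, ?)",
                     [(i, f"item-{i}") for i in range(10)])
    ids = list(range(10))

    # What a naive ORM flow does: one query (one round trip) per id.
    rows = [conn.execute("SELECT name FROM items WHERE id = ?", (i,)).fetchone()
            for i in ids]

    # What it should do: one batched query, one round trip for the whole flow.
    marks = ",".join("?" * len(ids))
    rows = conn.execute(
        "SELECT name FROM items WHERE id IN (%s)" % marks, ids).fetchall()

Each per-id query looks fine in isolation; the flow as a whole pays 10x the latency.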


I've found that mentioning bloat is the fastest way to turn a technical conversation hostile.

Do we need a dozen components of half a million lines each maintained by a separate team for the hotdesk reservation page? I'm not sure, but I'm definitely not willing to endure the conversation that would follow from asking.


What I once said to a less experienced developer in a code review is:

> Don't write stupid slow code

The context was that they wrote a double-lookup in a dictionary, and I was encouraging them to get into the habit of only doing a single lookup.

Naively, one could argue that I was proposing a premature optimization; but the point was that we should develop habits where we choose the more efficient route when it adds no cost to our workflow and keeps code just as readable.
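
In Python terms (the review wasn't necessarily in Python, so treat this as an illustrative sketch):

    counts, key = {}, "example"

    # Double lookup: the `in` test hashes and probes the dict, then the
    # indexing on the next line does the same hash-and-probe again.
    if key in counts:
        counts[key] += 1
    else:
        counts[key] = 1

    # One probe fewer, same behavior, just as readable:
    counts[key] = counts.get(key, 0) + 1

Neither version is "optimized"; one just avoids repeating work for free.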


And to add to that, because some people might not know or have forgotten: colors were easily adjustable in WinForms, so dark mode, high-contrast mode, green, blue, hot pink, etc. were all easily available in all these apps, and back in the day that was pretty standard to do for visually impaired people. No extra work from programmers was necessary, so it was vastly superior to today, where you have to beg for good dark mode support.


"Dark mode" today is the biggest scam - selling us a neutered form of color schemes that used to be a standard feature of any UI toolkit as something new and exciting.


It's quite interesting that while this topic comes up from time to time (it has been going on for a long time, after all), people on here seldom talk about the organizations that have been lobbying for this proposal for years, with big ties to the US and intelligence agencies. So this is by no means just a European phenomenon; there seems to be a much bigger agenda behind it all.

Now, at the moment, I don't have a good English-language source, but I am sure someone else could provide one?

Here is a German-language one[0], from netzpolitik.org, who have been following chat control for years now and have many articles going in depth on it. I am sure you could use translation software to read it until someone provides a better source. (And if you have not heard of this, you should!)

And while someone already linked to Patrick Breyer's website[1], which has a good overview, I do so again so maybe more people will see it. This thing is not new, but it should not be ignored either, and everyone should be informed about what's going on here. They will try to pass this again and again, as they have done for years now, and it's mostly been close calls until now.

[0] https://netzpolitik.org/2023/anlasslose-massenueberwachung-r...

[1] https://www.patrick-breyer.de/en/posts/chat-control/


This is a good time to remember the short span when Firefox would actually show an RSS symbol in the address bar when it found a feed on the page, allowing you to click on it to view the feed and subscribe. I thought the future of RSS was bright back then. They removed it shortly after.

That they removed even the formatted feed view a few years back was just an insult!

But they could also never manage to cash in on microformats. So much potential there.


And live bookmarks. This was the healthy way to read news from multiple sources before social media.


Live bookmarks was how I started, and what I've come back to after all, using this extension: https://addons.mozilla.org/en-US/firefox/addon/livemarks/

When I'm familiar with the source, the headlines are enough for me to know if I want to read, navigating the folders is super quick, and the feed indicator makes adding new ones very easy too.


Nothing was cooler than having a Feedburner bar on your website with a number showing how many subscribers you had.


I am so conflicted about this project every time it comes up.

I think I have understood for quite some time what it wants to do (though when checking the website, doubt always creeps in, because it is so incomprehensible), and every year when I download the application again, it looks a bit cleaner, a bit easier to just use. But still, basic things always elude me. Do I really have to read the handbook to figure out how to format text in the knowledge base? Half the windows and symbols just make no sense, etc. Try pressing a button to see what it does, and now everything looks different and what even happened?

It seems to improve glacially on that front, and I know that to really use it, I have to learn to program it, but I am also of the mind that basic functionality should be self-explanatory. And Pharo itself, as the basis of this, seems so convoluted and complex, I wonder if I even want to get into it.

And then, the community seems to still be solely on Discord, and that is always the point where I bow out and wonder if Cuis Smalltalk or other systems with simplicity as a core tenet are not much nicer to use and I should look there. Of course, in the end, I never get more than surface deep into Smalltalk, because while I want the tools to build my own environment, if I need to build them first, there is always more pressing work...

But honestly, a great knowledge base and data visualization I can intuitively just use and then expand later on with my own programs sounds like a dream workspace. It's just that it is really hard to get into at the moment. I don't know any Python, but I could just use Jupyter now and learn as I go; sadly, I never get that feeling here.


I'm basically in the same boat with this and all the Smalltalk systems I have tried. The environment is just so foreign. I get the gist of how programming works in Pharo (I have also looked at Squeak and Cuis), but Python just seems a lot more natural. It is also hard to find snippets of useful Smalltalk code on Stack Overflow for the things I want to do. Maybe Copilot is better there. The more practical problem is I'd never be able to justify using any of this for corporate work.


Even worse, there are groups of people who keep praising it and keeping us curious through the years. Yet there are no remarkable applications built with it except the tool itself.


There were lots of applications written in enterprises, hence why the Gang of Four book used Smalltalk and C++, not Java, as many think when bashing the book's ideas.

That would come later and take the air out of Smalltalk business adoption as IBM and others pivoted away from Smalltalk into Java.

It is no coincidence that while Java has a C++-like syntax, its runtime semantics, the way the JVM is designed and its related dynamism, Eclipse, key frameworks in the ecosystem, and by extension the CLR, all trace back to the Smalltalk environment.


I don't know about the design of JVM, but as far as object model and semantics, Java looks a great deal more like Simula than it does like Smalltalk.


Java was originally designed to be Objective-C with C++ syntax.

https://cs.gmu.edu/~sean/stuff/java-objc.html

Java EE was born out of an Objective-C framework, Distributed Objects Everywhere, from the OpenStep collaboration between Sun and NeXT:

https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere

Already there we have the Smalltalk lineage that heavily influenced Objective-C's design in the first place.

Then there is how the Smalltalk => Self => Strongtalk lineage ended up becoming the HotSpot JIT compiler on the JVM:

https://www.strongtalk.org

Finally, reading the Smalltalk-80 implementation books from Xerox PARC also shows several touch points between the designs of both VMs.

The way classes are dynamically loaded, introspection mechanisms, code reloading capabilities, sending code across the network (RMI), jar files with metadata (aka bundles), dynamic instrumentation (JMX).


Smalltalk-style tooling is mostly used in small businesses selling to either small or niche businesses. Things like custom ERP in manufacturing, custom CRM in insurance. Some military projects have also been done.

The Pharo folks insist on trying to adapt to industry, and that's also the focus of a lot of their published material, though there's still an academic legacy in there.

For me, the tricky thing is finding enough time to study the APIs: the basics of the programming language are easy to learn, and then one has to figure out the flow of creating a package and adding code to it, but after that it's a slog searching and reading the code of hundreds of classes to figure out where the thing one wants resides. On the other hand, when things break it's usually a breeze; the inspection and debugging are great, unless one makes the mistake of redefining some core class and corrupting the entire image.


> unless one does the mistake of redefining some core class and corrupts the entire image

In which case, one does not save that image ;-)

Or if one chose to save a broken image, one goes back to a previous image and reloads all the more recent logged changes up to, but excluding, the change that broke things.

https://cuis-smalltalk.github.io/TheCuisBook/The-Change-Log....


Right, so that was about me having changed something in a Pharo image that was fundamental to the system as such, hence the menus and keyboard shortcuts stopped working. I had to kill -9 from the operating system, and at that point debugging whatever I was debugging didn't feel like a breeze.

But it should be very rare, and besides the change log and similar facilities, it's also easy to just make timestamped copies of the image and push packages to git.


And that was about me being tired of the bogus "Smalltalk is so fragile" meme.

So how would we explore fundamental changes that break the debugger? Would the dumbest workable thing be to create subclasses without changing the originals?

I have come across someone who genuinely seemed to think that making copies of just the image was a viable approach to version control. Their project failed badly; and they were absolutely convinced the problem had been Smalltalk, when the problem was not understanding how they could use Smalltalk.


I didn't say it was fragile.


> corrupts the entire image

Epitome of fragility.

A different story if you'd gone-on to explain recovery.


Maybe find someone who actually thinks this environment is fragile and bring this hang-up there?


Yeah this really exposes how empty and vapid the praise and criticism you see of stuff here is. Of course there are some people here who are well known to be substantial in their experience, but sooo many people clearly are not. The vast majority are just superficial and you can and should ignore them.


For my part, I've never used it in anger. But I like to praise it because it represents a hint of what I wish my tooling were like, and I would like to see this concept of moldable development spread outside of the Pharo community. Not just in terms of using it for other languages, but also building it on top of other languages.

There are a lot of things I like about Smalltalk, but the parent poster is right, Python is a more practical choice. Not really because it's better-known so much as because it's procedural. Smalltalk is so all-in on object-oriented programming that it puts me in the wrong mental space for just banging out throwaway code for getting a question answered quickly. Instead I'm constantly being pulled toward all this "clean architecture clean code" thinking because that's just such a big factor in my headspace for object-oriented programming. Even if I don't succumb to it, it's still a drain on my executive function.

And then yes, agreed, building on Pharo's UI system is a problem. That's frankly something that the Smalltalk community needs to get away from across the board. It's just too clunky by modern standards. And it would take a lot to convince me to agree to adopting a Pharo-based tool like this at work, out of fear that all the non-native UI stuff would become a major source of accessibility barriers. And I don't quite understand why the Pharo community seems to be so convinced that it's a necessary part of the image-based development paradigm, when Lisp has been doing images without tight coupling to a single baked-in graphical IDE for decades.

I keep thinking maybe all it needs to be is something like an extension (or alternative) to the language server protocol for exposing some of the intermediate code analysis artifacts back to the developer. And then I can happily bang on that from a Jupyter notebook.


> But I like to praise it because it represents a hint of what I wish my tooling were like, and would like to see this concept of moldable development spread outside of the Pharo community.

You might like this talk! "Liberating the Smalltalk lurking in C and Unix - Stephen Kell" https://news.ycombinator.com/item?id=33130701


The absolute majority of Pharo code I've written has been quite procedural and throwaway in character. It's a tool I pull up for a bit of exotic exploratory programming against some remote API or file, typically just a 'script' in whatever the window is called.

In part because it's much easier to boot a fresh image and start hacking than to go through some python3 -m venv incantation that sometimes breaks or breaks something else. There's a lack of libraries though, and while it might now be easy to just point the image at a remote git repo to import one, I'm not sure; if it isn't, other languages have it easier. At least there you can just copy the algorithm into a file, put the right formula at the top, and start using it.


A small correction, though.

Python looks procedural; however, since the new object model was introduced in 2.2, with the original approach removed in 3.0, it is OOP to its bones, even when people don't realize it.


It is, but it still has really good ergonomics for a procedural-first development style, and in many respects that still feels more natural in Python than a fully object-oriented style.

By contrast, Smalltalk is so deeply object-oriented that it doesn't technically even have an if statement, just instance methods in the boolean class.
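
A rough Python sketch of that idea (class and method names made up, just to show the shape):

    class STTrue:
        def if_true_if_false(self, then_block, else_block):
            return then_block()   # the true object just runs the "then" block

    class STFalse:
        def if_true_if_false(self, then_block, else_block):
            return else_block()   # the false object just runs the "else" block

    # In Smalltalk, (3 > 2) answers a true object, and the "if" is simply
    # a message sent to it with two blocks, roughly:
    STTrue().if_true_if_false(lambda: print("yes"), lambda: print("no"))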


And the vast majority of the time, we just pretend it's a weird syntax if-statement.

The practical importance is that we use the same tools to search for implementers and senders of #ifTrue: as we use to search for implementers and senders of any other method. (We use the same pencil to sketch that we use to write.)


> Smalltalk is so all-in on object-oriented programming that it puts me in the wrong mental space for just banging out throwaway code for getting a question answered quickly.

On the one hand, I can see what you mean.

On the other hand, I can see someone start "banging out throwaway code" and testing it in less than 2 minutes:

https://www.youtube.com/watch?v=mXoxfmcDDJM


In the meantime, you can use Lepiter pages and program Python from there and inspect Python objects with inspector views defined either in Python or in Pharo :).


Are you referring to Smalltalk here or Glamorous Toolkit? :)


Well, I have seen reasonably important systems being written in Smalltalk, but these were not advertised too publicly because... well, they were considered a competitive advantage :).


What proportion are legacy systems and what proportion are green field?


Few are green field, but that's not because of the technology's merits.


FWIW, this project is promoting the ideas behind it as much as its own implementation. Personally, I'm a strong proponent of the underlying concept of "moldable development"; in fact, I think this isn't going far enough[0].

As for:

> Yet none of remarkable applications built with it except the tool itself.

The same is true of Smalltalk in general, and of Lisp, and some other technologies. Lack of wide adoption and a large number of success stories is not, by itself, proof that the idea/technology is fundamentally bad. The choices in our industry are strongly path-dependent, driven primarily by popularity contests and first-mover advantage. This dynamic is famously known as "Worse is Better"[5].

What the original essays didn't account for, however, is that whatever gets moderately successful today, becomes a building block for more software tomorrow. As we stack up layers of software, the lower layers become almost completely frozen in place (changing them risks breaking too many things). "Worse is Better" sounds fine on the surface, but when you start stacking layers of "worse" on top of each other, you get the sorry state of modern software :).

So yeah, those ideas may not fit the industry today, but it's worth keeping them in mind as a reference, and try to move towards them, however slowly, to try and gradually improve how software is made.

---

[0] - I write about that regularly; search my comments for the plaintext phrase "single source of truth"[1] for some entry points, like [2] or [3].

TL;DR: use of such "contextual tools" should become the way we build software. We need to have environments that come packed with various interactive "lenses" through which you can view and edit the common codebase using whatever representation and modality (textual, graphical, or something else) is most convenient for the thing you're doing this minute, switching and combining them as needed[4]. What we consider a codebase now - plaintext files we collaborate on directly - needs to evolve towards being a serialization format, and we shouldn't need to look at it directly, not any more often than today we look at binary object files compilers spit out.

[1] - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[2] - https://news.ycombinator.com/item?id=42778917

[3] - https://news.ycombinator.com/item?id=39428928

[4] - And saving the combinations and creating new tools easily, too. Perspectives you use to understand and shape your project are as much a part of it as the deployable result itself.

[5] - https://en.wikipedia.org/wiki/Worse_is_better


I am glad you find Moldable Development interesting as a concept.

> in fact, I think this isn't going far enough

I would be more than curious to learn more about how you see this space :)


The GT/Pharo technology may be the best, but due diligence always reports that the real reason neither GT nor Pharo gets large sources of funding is that the community is conflict-ridden and full of people with poor human qualities.


You raise a few points.

I am not sure what you mean by "people with poor human qualities". I have been particularly involved in GT, and less in Pharo, for many years. GT is based on Pharo, but it comes with its own environment and philosophy to support a goal that is not about Smalltalk. We do encourage you to join and see our community, especially if you are interested in learning a different kind of programming with us. I am yet to find people with poor human qualities there.

As for funding, we sustain GT through the work we do at feenk where we solve hard problems in organizations that depend on software systems.

And we do not claim that GT is the best. Only that it's the first to show a different possibility for how systems can be made explainable.


I did spend a day or two on the Discord a few years ago, and the conversation was dominated by some conflict over a refactor...

What due diligence are you talking about? Personal research? (Genuine curiosity)


Was this on the GT or Pharo Discord? They are not the same communities or systems :)


I see what you're saying, but looking at the video, which shows playgrounds and notes, I'm quite excited to try this because it looks a lot like JupyterLab. JupyterLab is familiar to any data scientist, but while it's easy to use, it's quite awkward to extend because it is built on a plugin system (understandably) written in TypeScript.

Here it's all one system, and thinking of the image as a key-value store feels quite natural too. Finally, the UI with panes that go to the right also feels natural and looks quite slick. I wonder if it's easy to switch between languages? Like, can the key-value store pass data to a Python program, or use an Apache Arrow table?


Thanks for the excitement :)

A few notes: the movement from left to right allows for dynamic exploration, which is different from the typical predefined exploration in a notebook. In Glamorous Toolkit we consider that both are important and complementary.

The dynamic exploration is enabled by the tools following the context. For example, the views in the inspector appear when you get to an object that has those views. You do not call these views by name. Also, choosing a different view allows you to change the course of the exploration midstream. Furthermore, you can create a view right in place, too.

The exploration possibilities are visible, but there are more pieces that are less visible that make the environment interesting. For example, there is a whole language workbench underneath and a highly flexible editor that can also be contextualized programmatically.

If you do give it a try, please let us know what you think of it.


Glamorous Toolkit is built in Smalltalk, but it is not intended for you to build systems using Smalltalk. It is a technology for building development environments for your systems. That's not quite the same. Oh, and we use it in corporate settings just fine, too :).


Thank you for describing the interest.

You do not have to read the handbook to format the text. You can use Markdown in a text snippet :). This gives you a compressed overview: https://book.gtoolkit.com/understanding-lepiter-in-7--6n7q1o...

> I know to really use it, I have to learn to program it, but I am also of the mind basic functionality should be self explanatory. And pharo itself as the basis of this seems so convoluted and complex...

We use Pharo as the programming language for building the system, and most extensions are expected to be written in it. It's possible to connect to other runtimes, like Python or JS, and extend the object inspector that works with remote objects using those languages. But overall, learning Pharo is a bit of a prerequisite. I certainly understand that it can appear foreign, but convoluted and complex are not attributes I would associate with it :).

Now, in GT, the environment is built from the ground up anew, and it's different from the classic interfaces found in Pharo or Cuis. And of course, it's different from a typical development environment, too, because we wanted to build a different kind of interface in which visualization is a first-class entity.

Our community is indeed on Discord a lot, but we also host discussions on our GitHub repository: https://github.com/feenkcom/gtoolkit/discussions

In any case, I am happy you find the need for "a great knowledge base and data visualization" relevant and useful.


I’m in the same boat: I really like the idea of it, but the actual use of it eludes me. It’s like there’s a cultural gap; even when they’re talking about the practical applications of the system, it’s incomprehensible. I eventually came to the conclusion that they are doing a lot of work to deal with situations that I have never even heard of people being in.


Interesting observation.

The problem we address is how to understand a situation about a software system without relying on reading code.

Reading code is the most expensive activity today in software engineering. We read code because we want to understand enough to decide how to change the system. The activity is actually decision making. Reading is just the way we extract information from the system. Where there is a way, there can be others.

In our case, we say that for every question we have about the system, it's possible to create tools that give you the answer. Perhaps a question is: why not use standard tools? As systems are highly contextual, we cannot predict the specific problems, so standard tools will always require extra manual work. The alternative is to build the tools during development, after you know the problem. For this to be practical, the creation of the tools must be inexpensive, and this is what GT shows to be possible.

This might sound theoretical, but it turns out that we can apply it to various kinds of problems: browsing and making sense of an API that has no documentation, finding dependencies in code, applying transformations over a larger code base, exploring observability logs and so on.

Does this help in any way?


I’m still trying to understand this. Could it be that these are much larger codebases than I’m used to? I actually haven’t seen software analysis of codebases provide much value.


Ok, do you write tests? If you do, you are already employing analyses :).

The interesting bit about a test is that it's inexpensive to create, and you can create it within the context of your system after you know the problem. You do not download it from the web. You create it in context, and then, when even a single test fails, you might stop and fix that one. Why? Because it reveals something you consider valuable.

Now, tests answer functional questions, but the same idea can be applied to any other kind of analysis. The key is to have them created within the system. If you download an analysis from the web, it will be solving some problem, just not yours, so it will not look interesting.


Thank you for putting into words your frustrations with trying to grok GT and Pharo, which matches mine. It's too bad because I can sense the fascinating technologies and the possibilities of a great developer experience that are there, but there is a lot of tribal and historic knowledge surrounding smalltalk that can be quite impenetrable.

I have been thinking about my own experience trying to learn Pharo and GT and came to the conclusion that, because of the nature of Smalltalk, written teaching materials are not effective and are in fact even painful to learn from. Nothing is wrong with the Smalltalk approach to computing, such as the GUI-centric and image-based environment; it is what makes it so interesting and an immersive development environment. But video tutorials and live-session hand-holding are what's needed to teach these environments, because of the highly interactive nature of Smalltalk. The Pharo MOOC exists, but that requires the type of academic-level time and mental commitment of back when I was in school. And as a hobbyist, I have less-demanding options for learning that are also interesting, so I end up pausing my efforts to learn Pharo/GT.

It's a tough situation for Smalltalk proponents because interactive instruction materials are very costly to produce and maintain. And the Smalltalk communities are much smaller and don't have massive corporate sponsors. Even cheaply made YouTube videos take time and effort, and I am grateful for those who make them out of their enthusiasm for the technology! But I'm afraid I've been conditioned to expect slick, engaging video content with clear, well-paced voice tracks and accurate captioning.

I do wonder if the Smalltalk community could benefit from a beginner-friendly, simplified version of the Pharo UI that starts up in a Jupyter-notebook-like interface and exposes only limited tooling, to give the learner a taste of what's possible, with some guardrails to prevent them getting lost. Gradually revealing the Pharo/GT features that way would keep the learner engaged and motivated. Because of the above-mentioned challenges with producing teaching content, self-guided interactive learning tools would be the best bang for the buck, I think. I thought the Elixir language manual was excellent; it was the first language reference doc I actually enjoyed reading! (Until it got to the string handling... then I ran out of attention span, lol.) Elixir also has Livebook.dev, which gives a notebook interface. Could be a good inspiration.

Another possibly dumb idea I had was that maybe Smalltalk is an ideal companion to current LLM tool/function-calling APIs, where an LLM can "guide" a live Smalltalk environment for developing an application through an API. Since a Smalltalk environment is always running, it can also (maybe) feed relevant live state context back to the LLM with some feedback prompts... I suppose a Smalltalk environment can serve as a sort of memory for an LLM as well as an agent for modifying the Smalltalk environment?

Sorry, didn't mean for this to sound like "you must do this for free for my mild interest in your passion project!" This has been more of a stream-of-consciousness spillage onto this forum because Grumbledour's excellent comment resonated with me. :) And the mention of notebook interface clicked in my head.

Anyway, sorry for ranting, and thank you GT/Pharo team for making something fascinating! Stuff like this is what keeps me in the technology field instead of totally leaving it out of frustration with where tech meets business!


> I have to learn to program it, but I am also of the mind basic functionality should be self explanatory.

It explains it right on the site: "To learn how to program it, first learn how to learn inside the environment." /s


This is such a bad and reductionist take.

When progressive enhancement was the thing, nobody wanted to "disable 1/3 to 2/3 of their browser's technology". Browsers mostly lacked many modern CSS and JS features across the board, and having pages fail gracefully while still using modern features for the few browsers that supported them was just the professional thing to do.

It still is today, of course, but it is obvious there are not many professionals left in webdev.


But, at least speaking for Germany here, the anti-technology stance of politicians and Germans in general certainly plays a huge part in hampering the startup scene. This always goes both ways. It's not wrong to be skeptical and to want privacy, etc., but it also means it's harder to make bank by tracking everyone and selling their data, which has been a huge part of American technology startups for over a decade. And it often also just means people don't trust new things and will not try them. Not sure this applies equally to the Netherlands, though.


The Dutch people are early adopters. But our startups generally think way too small.


It also gives you something Hacker News mostly lacks (besides the occasional Ask HN), and that is discussing topics organically vs. just discussing what people elsewhere have said. I think this is also an important distinction the older type of forum software has over the newer "link share" type.

