It takes time to derive the high-level picture from a codebase. I guess your walk-throughs would be similar to how I'd explore it myself - just without the mistakes.
EDIT: it's documentation, so it can get stale/out of sync as the codebase evolves (not a problem for PRs). Though high-level architecture/APIs rarely change.
BTW, on GitHub I keep expecting the text next to each file/dir to be a comment explaining its high-level purpose (instead of the most recent commit message).
Trying to codify the “tour” that a developer would otherwise have to discover themselves is exactly what I’m trying to help with. This just seems like such an important phase of learning, and it’s currently way too manual.
Regarding the comment about tours becoming stale: when you record the tour, you can choose to associate it with a specific commit/tag/branch. That way, when someone plays it back, it will continue to make sense, even in the midst of minor code changes/refactorings.
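To give a rough idea of what gets recorded (sketched here as a Python dict rather than the actual on-disk format - the field names and paths are just illustrative):

    # Illustrative sketch only - not the actual on-disk format.
    draft_tour = {
        "title": "Request lifecycle",        # hypothetical tour name
        "ref": "v1.4.2",                     # the commit/tag/branch the tour is pinned to
        "steps": [
            {
                "file": "src/server/router.py",   # hypothetical file/line
                "line": 42,
                "description": "All requests enter here and get routed to a handler.",
            },
            {
                "file": "src/server/handlers/users.py",
                "line": 17,
                "description": "Handlers validate input, then call into the service layer.",
            },
        ],
    }

Because the steps point at a pinned ref, playback stays coherent even while the default branch moves on.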
I’ve also been working on making the tour editing experience as simple as possible, so that revising a tour over time isn’t too difficult.
That said, any artifact that’s a derivative/complement of code (e.g. documentation, tests) represents an additional “burden” to maintain. So I’m focused on trying to keep the tours “stable” enough to support learning, and on reducing the cost of editing them, to hopefully support the continued investment.
I’m semi-hopeful that there’s a nice balance here, where the DX is enjoyable enough, and the team is motivated by the benefits of retaining and transferring such important knowledge. We’ll see how it goes!
I’d love to hear more about your thoughts on draft tours! Currently, you can record as many tours as you want per codebase, so you could record one scoped to just key API calls, and have any number of others covering more detail or alternate flows.
Would that satisfy what you’re thinking? Or were you thinking about being able to record a tour, and mark certain steps as being more important than others? Any feedback here is unbelievably valuable!
I meant a way to make up-to-date drafts: base them on an automatic execution trace, like from a debugger running the code.
You get this 100% up-to-date trace for free. It's crazy verbose, so apply filtering (e.g. to only key APIs) to make it manageable. Then edit that draft manually. Like an annotated, abbreviated, step-into debugger.
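Something like this rough Python sketch (sys.settrace is standard Python, but the API-path filter and the draft-step shape are just assumptions to show the idea):

    # Rough sketch: turn an execution trace into a draft list of tour steps.
    import sys

    KEY_API_PREFIX = "myapp/api/"   # hypothetical: only keep frames in the public API layer
    steps = []

    def tracer(frame, event, arg):
        if event == "call":
            filename = frame.f_code.co_filename
            if KEY_API_PREFIX in filename:
                steps.append({
                    "file": filename,
                    "line": frame.f_lineno,
                    "description": f"TODO: explain {frame.f_code.co_name}()",
                })
        return tracer

    def record_draft(entry_point, *args, **kwargs):
        # Run the code under trace, then hand the (still verbose) draft off for manual editing.
        sys.settrace(tracer)
        try:
            entry_point(*args, **kwargs)
        finally:
            sys.settrace(None)
        return steps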
My thinking is that this makes it easier to get an initial draft. But maybe that basic trace is easy and natural to do manually?
Also, I'm not sure how similar this "execution trace" would be to a tour written to explain the code... I suppose you could find out by examining your best tours to see how closely they actually follow execution order (if at all).
When I try to understand a codebase, I do trace calls manually, so there's probably some similarity.
Ah OK cool, apologies for misunderstanding you. Now that I’ve got the core record/playback experience in place, I’m keen to explore ways to simplify the authoring and maintenance experience even further. Enabling a “code profiler” for recording/updating tours could definitely be really useful. Thanks for the feedback!
No worries, I think your interpretation of just API calls is a good one, a bit like "tests as documentation". You could then have another tour for each module (API implementation). This approach would tend to confine the effect of changes (as in Parnas' "On the Criteria To Be Used in Decomposing Systems into Modules": https://www.win.tue.nl/~wstomv/edu/2ip30/references/criteria...)