For people who are interested in this, there is still a lot of very interesting research being done in real-time systems. I don't think in garbage collection in particular, though. Rather, the interesting stuff is bringing real-time systems into the multicore era. Lots of challenging problems.
Note that "real-time" here is used in the sense of embedded systems, not in the sense of "real-time search."
Yes. The research and technology certainly exists in academia, though I don't think it has really been commercialized. Whether it's actually going to be better in practice is going to depend on your workload and hardware, but RMA on SMP with migration (i.e. letting tasks move between cores rather than pinning each task to one core) definitely stands a chance of being significantly better than not allowing migration.
All that said, when you talk about RMA (rate-monotonic analysis), you are talking about simply assigning static priorities to each task. So if a task with priority 1 is ready, it will always take precedence over a task with priority 2, for instance. (Equivalently and without loss of generality, a task with a higher rate will take precedence over a task with a lower rate.) There is a (potentially) vastly better way to do it, which is with dynamic (instead of static) priorities, earliest-deadline-first (EDF) being the canonical example. Most of the interesting academic research for multicore real-time is focused on dynamic priorities. Again, that stuff certainly can be much better in practice, and is quite well-studied in academia, but mostly hasn't been commercialized, as far as I know. That's because, for real-time systems, you really want to do the dumb obvious thing unless you have no choice. I think we are increasingly going to hit the "you have no choice" threshold as we try to push the boundaries of what we can do while (often) simultaneously scaling down hardware. A "really really smart drone" would be a good example of something that would hit that threshold. Maybe even a "really smart car."
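To make the static-vs-dynamic distinction concrete, here's a minimal sketch (task names and numbers are made up for illustration) of rate-monotonic priority assignment next to an EDF-style pick of the next job:

```python
# Static (rate-monotonic) vs. dynamic (EDF) priority choice.
# All task names, periods, and deadlines here are hypothetical.

# Each task: (name, period). Under RMA, a shorter period means a higher
# priority, and that priority is fixed once and never changes at runtime.
tasks = [("sensor", 10), ("control", 20), ("logging", 50)]
rm_priority = {name: rank for rank, (name, _period) in
               enumerate(sorted(tasks, key=lambda t: t[1]))}
# rm_priority == {"sensor": 0, "control": 1, "logging": 2}  (0 = highest)

# Under EDF, priority is dynamic: at each scheduling point we pick the
# ready job whose absolute deadline is nearest, regardless of its period.
ready_jobs = [("logging", 12), ("control", 18), ("sensor", 25)]  # (task, deadline)
next_job = min(ready_jobs, key=lambda j: j[1])
# next_job == ("logging", 12): a low-rate task can still run first under EDF.
```

The point of the sketch: the RMA ranking is computed once from periods alone, while the EDF choice has to be re-evaluated as deadlines approach, which is exactly what makes it harder to certify but potentially much better at using the hardware.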
I think the more likely near-term result of a really smart car is that they do a single-core RMA for the mission critical systems and include a failsafe for when the non-hard-real-time AI system fails.
So far it's been cheaper to just increase the compute resources than to move from static to dynamic priorities. FAA mission critical certification (as an example) consumes a lot of engineer-months, and I expect automotive to start to move a few steps closer to that model in the wake of the Toyota issue.
> I think the more likely near-term result of a really smart car is that they do a single-core RMA for the mission critical systems and include a failsafe for when the non-hard-real-time AI system fails.
I agree with this. The question is what happens when you want to have features that require real-time AI. If we have cars driving themselves, they are (potentially) going to have to be able to make very smart decisions in a provably bounded amount of time.
> FAA mission critical certification (as an example) consumes a lot of engineer-months, and I expect automotive to start to move a few steps closer to that model in the wake of the Toyota issue.
In the long run, we may get a certified RTOS that supports dynamic priorities. At that point, dynamic priorities may not be an obstacle to assurance/certification.