In a lot of cases, even a rough pick of the best starting point is valuable, even if it turns out to be far off. This is true of problem solving in general: engineering, systems / software architecture / development, social engineering (marketing, legal, etc.), science, and art.
A problem is by definition something not yet solved, or at least not yet solved by a particular person or group. Estimating the best starting points, even when they are wrong or suboptimal, is still usually better than starting from a blank page and first principles.
But, as you point out, this kind of rough starting point is not always directly usable as a result. It's sometimes fine for fuzzy uses, never acceptable for important or dangerous cases, and everywhere in between for the rest.
It's one thing to say you'll build systems where LLMs generate starting points, but I don't hear much of that (yet?). At the moment it feels like the optimists are envisioning LLMs doing everything independently.