This seems like a good way to flood the zone with superficial analysis by allowing writers to feign more experience and expertise on a topic than they really bring to the table. It is hard enough to separate the wheat from the chaff as it is. Do we really want people depending on LLMs, subject to occult biases, to guide those with limited experience to produce work that will compete for precious and limited attention?
I understand the concern here. And as someone who prides himself on being able to write well, I too would hate to see the skill become devalued if the "zone gets flooded," as you say. However, the point I tried to convey was that there still has to be a human, and a creative one at that, who can take data-based "analysis" and turn it into something more worthwhile, and that by engaging in the discourse we can prevent the chaff from overcoming the wheat, focusing the argument and moving it forward.
My concern isn't for producers; it is for consumers. The problem isn't that quality writing would be devalued by the process you advocate; it is that poor analysis would be a little less obvious. To focus on what I'm trying to argue: your framing takes for granted that LLMs can produce quality analysis. Following from that assumption, you argue that a human element is needed to refine and enhance, but I very much doubt the LLM is capable of producing anything of inherent quality in the first place. Anything an LLM kicks out should be assumed incorrect until it is validated. What LLMs excel at is producing content that has the superficial appearance of quality writing and analysis but isn't. They can probably be useful for summarizing data, but that assumes there is no political bias or buried conflict of interest, and such bias is probably the rule, especially with Gemini.
If you're a skilled analyst on a given topic with baseline writing proficiency, it becomes clear that using an LLM isn't very helpful. The less competence you have, the less obvious it is when the LLM is producing garbage. You're marketing this technique to people with limited competence who won't be able to easily detect the analytical failures of LLM-produced content. I can see this being useful for people who want to pretend they're doing analysis for projects that will never be reality-tested, but not for anyone trying to produce meaningful analysis that is useful in the real world.
Finally, I think engaging in writing and analysis without relying on tools that seem to relieve us of some of the burden develops better thinking and analytical ability, which is critical for the military population. To focus the argument, as you say: it isn't between using only LLMs and using LLMs with human revision and input. The argument is really over the extent to which LLMs are useful for analysis and for developing analytical skill in the military population. I say their usefulness is far more limited than you suggest, and that you ignore the considerable limitations of this technology.