2 Comments
Rainbow Roxy

Excellent analysis; it really makes me wonder whether this drift toward a statistical average is an inherent limitation of current LLM architectures or something that could be mitigated with more personalized fine-tuning.

Peter Benson

Right now it looks like an inherent limitation, given the training data and how the systems have been trained. Because the systems are ubiquitous, with the same model serving all users, responses are likely to converge over time. While a degree of 'randomness' is built in to differentiate answers, the 'generic' nature of the GenAI model itself is a limiting factor. Some deviation is possible through fine-tuning and memory preferences accumulated over time; however, I have yet to see any benchmarking showing the spread of diversity coming anywhere near human levels.