Limitations of AI: The Raw and Dirty Low-Res Version
Author: RockMeKiernan (formerly UDIORockMeAmadeus)
Uploaded: 2025-11-27
Views: 25
As I talk about things, I have a tendency to start brainstorming, and the brainstorm gets condensed into ideas and interrelated ideas. Just understand that the information in my brain comes from having watched documentaries and from experiences I've had, but for whatever reason the information could be erroneous, or it could be new information that results from me bringing ideas together.

Just now, my mom was opening the door to let me know that we're going to be leaving pretty soon for a relative's Thanksgiving party, and she wanted to make sure I wasn't going to get involved in something that would run long. You didn't hear that, but I'm sitting here doing voice-to-text conversion, so things like that come up, and you can see how this would feed into the training of an AI if it was pulling documents that included content that was out of context and irrelevant, in this case, to the description of a video, which is what I'm writing. (I may not even include this part in the description.)

The idea is that AI lacks the detail that we don't talk about. It derives answers based on the prompts we give, but it may not understand the complete context of what we're asking about, and that's the reason why you need to interact with the AI: you can't just take its answers at face value.

Another thing I talk about in this video is why you don't want to use AI as a teaching tool for children: it's going to include only the stuff that has been explicitly stated, and it's not going to accurately reflect certain fields of knowledge that haven't been condensed well enough.

Another thing that can occur with AI is that the information it's trained on is stored in neurons, neurological representations of information, which are more akin to, for instance, quadratic equations being used to curve-fit details of information and extrapolate the related outputs from information that was accurately or inaccurately represented. Those extrapolated answers come out looking either non-representative of the training but somehow usefully extrapolated from it and beneficial, or completely crazy, when the little hiccups in the information don't get stored in the AI's model.

So people need to understand that AI is a tool. It is not a replacement for people, because it lacks the nuances of implicit information that hasn't been explicitly stored in a place where an AI can be trained on it, and the AI is not going to be able to make good representations of that information if it doesn't have enough bits in its neurons. The neurons store bits, but they also have other things in there, like weightings on the output that gets used as input to another neurological layer that is going to make a single decision; certain inputs need to have different weightings. And whenever it's trained, a function goes up and down these tree structures of neurons, which are mathematical neurons.
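To make the curve-fitting point above concrete, here is a minimal sketch (my own toy example in Python, not how any particular model actually works): fit a quadratic to data observed only on a narrow range, then ask it about a point far outside that range. The underlying function, the interval, and the noise level are all hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical "training data": samples of a function observed only on
# a narrow interval. The underlying truth is sin(x), but the model never
# sees that rule -- it only sees the points.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.5, 20)
y_train = np.sin(x_train) + rng.normal(0.0, 0.02, size=x_train.shape)

# "Compress" the data into three quadratic coefficients -- a very lossy
# representation, analogous to weights storing a curve fit of the data.
coeffs = np.polyfit(x_train, y_train, deg=2)
model = np.poly1d(coeffs)

# Inside the training range, the extrapolation looks trustworthy...
print("x=1.0  truth=%.3f  model=%.3f" % (np.sin(1.0), model(1.0)))

# ...but well outside it, the quadratic keeps bending its own way and
# diverges from the truth: a confident answer instead of "I don't know".
print("x=5.0  truth=%.3f  model=%.3f" % (np.sin(5.0), model(5.0)))
```

Inside the interval the fit is within a few hundredths of the truth; at x=5 it is off by a factor of several, which is the "beneficial or completely crazy" behavior described above.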
It goes up and down these trees and adjusts the weightings in order to produce less erroneous output, so that it knows what the key says, the answers to certain questions, whenever it's being tested on its understanding of the information. It dots its i's and crosses its t's and does everything necessary for the information to be accurately represented, because what it's going to do is compress the information in a very lossy format, but one that can be extrapolated in certain cases to produce equivalent information. It covers a certain set of data points, but it's going to make errors, and it's going to make errors because that stuff doesn't hold enough information.

Now, what you can do with this description, which I'm not going to do now because I have to go to a function, is take this text and shove it into ChatGPT, and based on the context of what I'm saying, it will be able to give you a better answer of what it is I'm talking about. It's not going to produce erroneous output unless I talked about something that shares implicit knowledge that hasn't been written down. Implicit knowledge would be stuff that hasn't been written down, like current events, current technologies, and things like that that it hasn't been trained on. In certain cases the AI will extrapolate information rather than say "I don't know that information"; it's designed to extrapolate. In certain cases these LLMs, because that's what we're calling AI right now, these large language models, are not going to be able to talk about current events or current technologies, the bleeding-edge stuff. They're also not going to be able to talk about the kind of stuff that was learned as a kid and never written down.

Note: 5000 char limit hit
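As a rough sketch of the "adjusting the weightings to make the output less erroneous" part, here is a toy gradient-descent loop in Python (again my own illustration, not the training code of any real LLM): a single mathematical neuron whose weights get nudged each pass so its outputs land closer to the answer key. The target rule, data, and learning rate are all made up for the example.

```python
import numpy as np

# Toy dataset: inputs plus the "answer key" the neuron is graded against.
# Hypothetical target rule: y = 2*x1 - 3*x2 + 1, which the neuron must
# recover purely by adjusting its weightings and bias.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 1.0

w = np.zeros(2)   # weightings on each input
b = 0.0           # bias
lr = 0.1          # learning rate: how big each adjustment step is

for epoch in range(200):
    pred = X @ w + b               # forward pass: current outputs
    err = pred - y                 # how wrong we are vs. the key
    # Gradients of the mean squared error; training walks the weights
    # "up and down" against these to make the output less erroneous.
    grad_w = 2.0 * X.T @ err / len(y)
    grad_b = 2.0 * err.mean()
    w -= lr * grad_w
    b -= lr * grad_b

print("learned weights:", np.round(w, 3), "bias:", round(b, 3))
# -> approximately [2.0, -3.0] and 1.0: three numbers are the lossy
#    compressed summary of 200 data points, stored "in the neurons".
```

The whole dataset ends up compressed into three numbers, which is exactly the lossy-but-extrapolatable storage the description talks about; anything in the data that doesn't fit that form is lost.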