A Perspective on the Brain, Derived from the Google Lens App
Recently I used the Google Lens application for a specific purpose, and a few things came to mind. Let's see how the brain works.
Identifying Objects
How do you identify a rose? It's a flower, but how do you know it's a flower? There is a memory function inside the brain which stores the information the first time. The next time the same object is brought in front of the senses (eyes), the brain recognizes the rose.
The more storage and information inside the brain, the more things the perceiver will recognize.
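To make the analogy concrete, here is a minimal sketch of recognition as a memory lookup: store an object the first time, recognize it every time after. Everything in it (the Memory class, the feature strings) is a hypothetical toy, not a claim about how the brain or Lens actually works.

```python
# Toy model: recognition = first-time storage + later lookup.
# All names and feature strings here are invented for illustration.

class Memory:
    def __init__(self):
        self.store = {}  # features -> label, filled the first time we "see" something

    def learn(self, features, label):
        """Store an object's features on first encounter."""
        self.store[features] = label

    def recognize(self, features):
        """When the same features reach the senses again, return the stored label."""
        return self.store.get(features, "unknown")


brain = Memory()
brain.learn("red petals, thorny stem", "rose")      # first encounter: stored
print(brain.recognize("red petals, thorny stem"))   # -> rose
print(brain.recognize("orange fur, stripes"))       # -> unknown (never seen before)
```

Notice that more entries in the store means more things recognized, which is exactly the claim above about storage and information.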
Let's go deep:
- Humans cannot recognize things that are not in their ‘Memory’.
- Humans don't recognize things where attention is not present. For example, if you like Mercedes, you will not notice a Toyota even if it's right beside the Mercedes on the road. We can only recognize/notice things we are concentrating on.
- Humans recognize things as per their bias. A person wearing dark glasses will see the world as dark, even though it is bright. A selfish person sees everyone as selfish (due to the selfish experiences and memories/information present in that person's brain).
In all three cases above, we fail to recognize things in their original form.
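Here is a toy sketch of those three filters stacked together. The names (perceive, attention, bias) and the string features are invented purely to model the argument; they are assumptions for illustration, not a description of real cognition.

```python
# Toy model of the three filters: memory, attention, bias.
# Everything here is hypothetical, chosen only to mirror the list above.

memory = {  # filter 1: only what was stored earlier can be recognized
    "three-pointed star badge": "Mercedes",
    "oval badge": "Toyota",
}

def perceive(features, attention=None, bias=None):
    label = memory.get(features, "unknown")           # filter 1: memory
    if attention is not None and label != attention:  # filter 2: attention
        return "not noticed"
    if bias is not None:                              # filter 3: bias tints the result
        return f"{bias}-tinted {label}"
    return label

print(perceive("oval badge", attention="Mercedes"))   # -> not noticed
print(perceive("oval badge", bias="dark-glasses"))    # -> dark-glasses-tinted Toyota
print(perceive("red petals"))                         # -> unknown (not in memory)
```

In each branch the original object never comes through in its original form, which is the point of the three cases.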
How could Google be building a brain?
Google Lens recognizes each and every flower/plant/object you point it at. Don't you think it holds more ‘information’ than any single person in the world?
Also, Google Lens would not have the bias or attention issues described above (points 2 and 3). Hence the perception of Google Lens is far better.
Knowing a person:
How do you know and understand a person?
- Based on the person's waking habits, lifestyle, feelings, emotions, reactions, and food habits.