Chinese Room Argument: Can AI Have Consciousness?

Why, according to this argument, is it impossible for AI to think the way we humans do?

The Chinese Room Argument is a thought experiment proposed by philosopher John Searle in 1980. I explained what a thought experiment is and how they are used in philosophy in one of my earlier writings, in which I tackled another thought experiment. Here's the link to that writing below:

https://www.typelish.com/b/the-knowledge-argument-can-consciousness-be-deduced-to-purely-physical-107674

The Chinese Room is one of the best-known arguments against the notion of strong AI. Strong AI claims that AI can genuinely think, understand, and have other cognitive states; in other words, that it is conscious. Weak AI, in contrast, claims that AI only simulates thought.

The argument goes like this: Imagine a monolingual English speaker who doesn't understand any Chinese placed inside a room. He is given a rule book in English that tells him how to manipulate Chinese symbols. Questions written in Chinese are passed into the room from one side, and by following the instructions in the book and manipulating the symbols, he produces Chinese answers, which he then passes out the other side of the room. At no point in the process does he understand the language itself; he only matches and rearranges symbols according to the rules.

To the people outside the room, it seems as if the person inside understands Chinese and is capable of producing Chinese answers. In reality, however, he is just following a set of rules without understanding the meaning of the symbols. Now imagine the whole room and the process inside it as an AI: the rule book as the program it runs, the Chinese questions as its inputs, and the answers produced in the room as the outputs the AI gives us. Just like the person in the room, the AI doesn't truly understand or have mental states; it merely applies the rules it is given to turn inputs into outputs.
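
To make the analogy concrete, here is a minimal Python sketch of the room as a program. It is purely illustrative; the rule table and the phrases in it are hypothetical stand-ins for Searle's English rule book. The point is that every step operates on symbol strings alone:

```python
# A toy "Chinese Room": replies are produced by matching symbol
# sequences against a rule table. Nothing below represents what
# any symbol means; the phrases are hypothetical examples.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",  # matched as raw character strings,
    "你会说中文吗？": "我会说中文。",  # not as questions and answers
}

def chinese_room(question: str) -> str:
    """Look up the reply paired with `question` in the rule book.

    The comparison is purely syntactic: strings either match or they
    don't. No meaning (semantics) is ever computed or stored.
    """
    return RULE_BOOK.get(question, "请再问一遍。")  # fallback: "please ask again"

print(chinese_room("你会说中文吗？"))  # prints a fluent-looking Chinese reply
```

To an observer reading only the outputs, the replies look fluent, yet the program contains no representation of meaning anywhere, which is exactly the situation of the person in the room.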

All the input an AI can receive is purely syntactic (symbolic), and so is its internal processing. It cannot grasp the semantics (meaning) behind the symbols, and you cannot get semantics purely from syntax. Since we humans can have richer, more complex inputs and data about the outside world through our sensory organs, our brains are not comparable to AI in this respect. No matter how intelligent AI gets, and no matter how well it answers complex questions, it will only ever process syntax. Hence, according to Searle, AI will never be able to think the way we do and can only simulate thought, since syntax by itself has no meaning without semantics.