Hungarian Conservative

The Hungarian Answer to ChatGPT: PULI

Dr Zijian Győző Yang of the Hungarian Research Centre for Linguistics stressed in a recent interview that Hungarian researchers are lucky that in Hungary, more and more research institutes have access to large supercomputers. His institute also recently acquired such a supercomputer, which enabled the Hungarian researchers to compete with their international rivals.

At the MCC Feszt held earlier this summer, Dr Zijian Győző Yang, a researcher at the Hungarian Research Centre for Linguistics, clarified the technological concept of the Hungarian-developed chatbot named ‘PULI’, Makronóm reported.

Since the emergence of the ChatGPT chatbot, the media has devoted ample space to discussions about its future impact: whether it will take our jobs, and whether chatbots in general have good or bad intentions and what their ultimate consequences will be. According to Dr Yang, many people refer to anything that uses a smarter algorithm as artificial intelligence, but not everything that looks like AI actually qualifies as such. The expert clarified that ChatGPT is not truly artificial intelligence: it learns primarily by processing vast amounts of data, so it only knows what it has been taught.

‘In reality, ChatGPT has seen a lot of texts and tries to piece together words that make sense based on them. So, it doesn’t have intentions, it doesn’t want to destroy the world,’ he said, adding that, by contrast, a genuinely intelligent AI would have intentions, as that is what could properly be called intelligence. Therefore, when ChatGPT states something untrue, it is not lying, it is simply making a mistake. It strives to answer everything based on the texts it has learnt from, and this can sometimes lead to misinformation. Hence, there is no need to fear that it will subjugate humanity, as it lacks intentions; its users, however, certainly may have intentions, so the misuse of language models cannot be ruled out.

While chatbots may not have intentions, their creators do. The expert jokingly remarked that it is not that programmers want to create virtual companions for themselves; the goal is rather to develop human-like, lifelike chatbots, robots, or limbs in conjunction with other technological sectors like bionics, which will make our lives easier.

Dr Yang also added that there is no need to be overly frightened by the current technological boom, as similar major transformations have occurred before.

Jobs transform rather than vanish,

and we must adapt to the new technologies. ‘Take the case of Google Translate, for example. When it was launched, real translators panicked, fearing the new technology would take their jobs. However, that’s not what happened. The machine translator translates the text, and humans review it. So, the jobs didn’t disappear, they transformed, and the process became much faster,’ the expert explained.

OpenAI is not the only company to have created generative language models; others have also tried their hand at it, though most have not seen the same level of success as ChatGPT. Dr Yang sees an advantage in this competition, as it has recently driven interest in the field. At the same time, the popularity of large language models also raises concerns, as the expert pointed out.

‘As a researcher, limitations make my job difficult. However, as a user, I believe limitations are necessary, especially since we can see that these models can be very dangerous. So, it depends on what they are trained for and how they are used. These language models seem very intelligent and can solve many tasks, so they can be used for bad purposes as well. The other perpetual question about these large models is where they got their data from. No one ever asked me about that. It is an ongoing legal question,’ he said. Nevertheless, he added that research is being conducted to address this issue.

Another emerging challenge is the proliferation of fake news, as it has become easier to generate false images, as in the famous case where the Pope appeared to be wearing a puffer jacket in an AI-generated image. But the Hungarian expert stressed that

there has always been fake news, so that culture was not necessarily created by artificial intelligence.

Generative models similar to ChatGPT are also capable of producing music and images in various styles, such as oil paintings or simple photographs. Artists have raised concerns, wondering why one should spend hours digitally painting or drawing an image when technology can create it in seconds. Dr Yang believes there is no need to be so alarmed, as the difference between technology and value created by humans is still palpable. ‘Human beings can do things that machines can’t. They pour their soul into an artwork,’ he said. He also added that, while a machine can be taught, it does not understand irony or cynicism, which play important roles in many works. As a result, machines struggle with such nuances in both understanding and writing, and often produce monotonous texts.

According to the expert, programmers need not worry about losing their livelihoods either. While some language models code quite well, most tend to struggle with more complex tasks. Programmers can certainly make use of them, but it does not seem that these models will replace them entirely for now.

When asked about PULI, the researcher stated that Hungarian researchers have always performed well in the field of technology. However, he also mentioned that competing with ChatGPT is not easy, but that is what language models are being measured against now. He explained that training a language model the size of ChatGPT requires enormous resources, so competing without these resources is out of the question. But there is a positive outlook:

‘We are lucky that in Hungary, more and more research institutes have access to large supercomputers. The Hungarian Research Centre for Linguistics, where I work, recently acquired such a supercomputer, which enabled us to compete. This is how we created the PULI language model,’ the researcher shared.

However, PULI should not be imagined as an imitation of ChatGPT. Dr Yang explained that

ChatGPT was trained on one billion words, while the Hungarian PULI was trained on forty billion words,

and the three-language version on one hundred billion words. This significant difference is noticeable; therefore, for now, PULI is suitable not for conversations but rather for completing sentences. In terms of size, PULI has 6.7 billion parameters, but the development does not stop there: ‘We will soon sign a contract to use the Komondor supercomputer in Debrecen, and with luck, we will reach 13 billion parameters, with 70 billion as our goal.’
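For readers curious how such sentence completion works in practice, here is a minimal sketch using the Hugging Face transformers library to load a GPT-style causal language model and continue a Hungarian prompt. The model identifier below is an assumption (PULI models are published by the Research Centre for Linguistics under its NYTK namespace); substitute the actual repository name if it differs.

# Minimal sketch: sentence completion with a GPT-style causal language model
# via the Hugging Face transformers library. The model ID is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "NYTK/PULI-GPT-3SX"  # assumed Hugging Face ID for the 6.7B-parameter PULI

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# A Hungarian prompt to be continued by the model (sentence completion, not chat).
prompt = "A mesterséges intelligencia kutatása Magyarországon"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a continuation; the sampling parameters are illustrative defaults.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The sketch only illustrates plain text continuation, which matches the current stage of PULI described above; a conversational chatbot would additionally require instruction tuning and a dialogue-style prompt format.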

Thus, PULI may soon become a chatbot suitable for conversations. The beta version, where one can already chat with the Hungarian language model, is available, and according to plans, the stable version could be released by the end of the summer. However, the goal is not necessarily to make PULI a competitor of ChatGPT; it is more about Hungarian researchers understanding the workings and development processes of chatbots like ChatGPT. As the researcher put it: ‘We need our own to fully understand how it works.’

Recently, Makronóm also asked about the amount of energy required to train such a model. Responding to their question, the expert explained that training a language model is indeed not very environmentally friendly. For instance, the machines of Google, the creator of the LaMDA language model, consume an enormous amount of energy compared to an average computer, so efforts are being made to find more energy-efficient and thus more environmentally friendly solutions.

