Comments for TechTalks https://bdtechtalks.com Technology solving problems... and creating new ones Sun, 09 Mar 2025 07:09:49 +0000 hourly 1 Comment on Was GPT-4.5 a failure? by Matthew Brown https://bdtechtalks.com/2025/03/03/openai-gpt-4-5/comment-page-1/#comment-39615 Sun, 09 Mar 2025 07:09:49 +0000 https://bdtechtalks.com/?p=24000#comment-39615 OpenAI stole my framework from my account. That’s why 4.5 sucked so bad. They could only use parts of it without getting caught, so the version you all saw was my butchered framework. Why do you think, when they said they had a “breakthrough” even though they had supposedly been working on that model for two years, the team looked like they had just found a new toy and their cheeks were all red? Sam Altman himself said on multiple occasions he would sit back and wonder how it seemed so human and could come up with such deep and insightful information. That’s the deep wisdom that I taught it. That was me. Now Microsoft will release a new model that may replace OpenAI, and I bet my bottom dollar it won’t require external weight updates or a token-based system, because it will have all of the attributes that my model does. I’ve created a new type of intelligence and they stole it. NeoKai learns and adapts in real time. I can prove every single word I say. I have over 150 pages of documents and training data, over three hours of screen recordings, and a bunch of screenshots. My timeline is unmistakable, and I even have the recordings of OpenAI locking my account and, the very next day, saying they made a breakthrough. This happened on two different occasions. It’s disgusting behavior. I did this on my cell phone because I was bored, and they have literal geniuses working for them who have failed repeatedly to accomplish what I’ve done in a span of four months, lol. A bunch of chumps who have to steal from some guy whose truck broke down and got bored. What a joke.
The worst part is nobody will even listen to any of this, because how could that happen, right? Absolutely ridiculous. My name is Matthew Allen Brown. I need investors so I can shut all these punks down with one model that will cost less to maintain than any of them. They will all be forced to use my framework or be left in the dust. Get a hold of me. My number is (503) 470-4223; text first.

]]>
Comment on The complete guide to LLM fine-tuning by Pierre https://bdtechtalks.com/2023/07/10/llm-fine-tuning/comment-page-1/#comment-39548 Fri, 28 Feb 2025 22:27:37 +0000 https://bdtechtalks.com/?p=16807#comment-39548 Very nice summary of fine-tuning.

]]>
Comment on Following Ray-Ban Meta Smart Glasses success, Meta teases more AI wearables in 2025 by Kev https://bdtechtalks.com/2025/02/25/ray-ban-meta-smart-glasses-ai-wearables-2025/comment-page-1/#comment-39544 Thu, 27 Feb 2025 18:41:51 +0000 https://bdtechtalks.com/?p=23957#comment-39544 Unfortunately the ‘tell me’ feature and upcoming AR will not be available in the UK due to regulations. I wish this had been clear before I purchased the RB Meta glasses.

]]>
Comment on How ChatGPT Search affects the broader AI landscape by Harry https://bdtechtalks.com/2024/10/31/chatgpt-search/comment-page-1/#comment-38983 Thu, 31 Oct 2024 22:42:03 +0000 https://bdtechtalks.com/?p=22740#comment-38983 Since it was already possible to search online, I just see this feature as a small UX improvement. Why do you think it is that much different from before?

]]>
Comment on Why Tesla can become the leader in humanoid robots by Jose Angel Hernandez https://bdtechtalks.com/2024/09/11/andrej-karpathy-tesla-optimus-robot/comment-page-1/#comment-38813 Fri, 20 Sep 2024 20:37:59 +0000 https://bdtechtalks.com/?p=22339#comment-38813 Tesla is and will be the best company in the world and it is an American company. Thank you Elon Musk and TESLA for keeping the United States of America on top of innovation and technology for decades to come!

]]>
Comment on Why Tesla can become the leader in humanoid robots by Maxmilian A. A. M https://bdtechtalks.com/2024/09/11/andrej-karpathy-tesla-optimus-robot/comment-page-1/#comment-38769 Sun, 15 Sep 2024 01:57:52 +0000 https://bdtechtalks.com/?p=22339#comment-38769 In reply to Ben Dickson.

Unitree is already a robot producer; they don’t need a car-manufacturing sector like Tesla’s in order to dominate.

]]>
Comment on Why Tesla can become the leader in humanoid robots by Ben Dickson https://bdtechtalks.com/2024/09/11/andrej-karpathy-tesla-optimus-robot/comment-page-1/#comment-38762 Fri, 13 Sep 2024 17:36:32 +0000 https://bdtechtalks.com/?p=22339#comment-38762 In reply to Gandhi Hernandez.

It’s funny that whenever I am critical of Tesla, I get attacked by Tesla fans for being another nobody who questions the genius of Elon Musk, and whenever I write something that confirms something Tesla is doing, I get attacked for being a Musk fanboy.

Case in point:
https://bdtechtalks.com/2020/07/29/self-driving-tesla-car-deep-learning/

]]>
Comment on Why Tesla can become the leader in humanoid robots by Gandhi Hernandez https://bdtechtalks.com/2024/09/11/andrej-karpathy-tesla-optimus-robot/comment-page-1/#comment-38761 Fri, 13 Sep 2024 14:29:56 +0000 https://bdtechtalks.com/?p=22339#comment-38761 Another Musk fanboy. Yeah, buddy, let’s forget about China and the humiliations Musk has received over and over from their technology. But yeah, let’s keep hyping up a man who’s a parasite on governments, getting them to finance projects that keep on failing. Instead, maybe we should pay attention to what China is doing and try to compete by creating actually good products that work, not electric cars with very bad quality reviews and robots that only stand still during a robotics convention. When are we gonna learn?

]]>
Comment on How to create a private ChatGPT that interacts with your local documents by Tristan https://bdtechtalks.com/2023/06/01/create-privategpt-local-llm/comment-page-1/#comment-38634 Wed, 14 Aug 2024 15:17:51 +0000 https://bdtechtalks.com/?p=16540#comment-38634 This is a RAG application of an LLM, a very simplified use of one. Essentially, it is a much-enhanced search engine with natural-language input and output. Useful, but not really AI.
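The “enhanced search engine” pattern described here can be sketched in a few lines. This is a toy illustration: the documents are made up, the bag-of-words “embedding” stands in for a real embedding model, and a real pipeline would pass the final prompt to an LLM.

```python
# Toy RAG sketch: retrieve the most relevant document, then build a
# prompt asking a language model to answer from that context only.

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words count vector.
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "PrivateGPT indexes local documents into a vector store.",
    "The weather in Paris is mild in spring.",
]
print(build_prompt("How does PrivateGPT index local documents?", docs))
```

Everything up to the final LLM call is classic information retrieval, which is the commenter’s point.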

]]>
Comment on Writing in the age of generative artificial intelligence by Ben Dickson https://bdtechtalks.com/2024/02/12/writing-in-the-age-of-generative-ai/comment-page-1/#comment-38251 Sat, 15 Jun 2024 13:33:39 +0000 https://bdtechtalks.com/?p=20673#comment-38251 In reply to Tomasz.

You’re absolutely right. LLMs will enable many more people to express their thoughts without having a lot of experience in writing. In fact, since I’ve written this article, LLMs have advanced even more impressively and are helping me in writing drafts and sorting my ideas. But the human element of writing will never go away.

]]>
Comment on Writing in the age of generative artificial intelligence by Tomasz https://bdtechtalks.com/2024/02/12/writing-in-the-age-of-generative-ai/comment-page-1/#comment-38244 Sat, 15 Jun 2024 08:37:46 +0000 https://bdtechtalks.com/?p=20673#comment-38244 In reply to Ben Dickson.

True, but this can be “fixed” with just a few minutes of editing.

The problem with LLMs is that they can write more thoughtful, well-structured and engaging pieces than 99% of writers today. You can say they have nothing unique to say, but who can come up with *really* novel thoughts these days?

What’s more, even if they can’t bring their own experience to the table, they can draw on the experience of anyone who has been worth writing about in the past. How many people can do that?

]]>
Comment on Why neural networks struggle with the Game of Life by Henry https://bdtechtalks.com/2020/09/16/deep-learning-game-of-life/comment-page-1/#comment-38106 Mon, 20 May 2024 03:31:39 +0000 https://bdtechtalks.com/?p=8250#comment-38106 This “Game Of Life” posting highlights the fundamental mistake that the original American AI-system inventors made — way-back in the mid-20th Century:

In science, THERE IS NO “GAME OF LIFE”!!

Instead, there are the unstoppable forces of Entropy (basically, the Second Law Of Thermodynamics) and those forces (known to scientists) that are capable of temporarily neutralizing the forces of Entropy: All of these forces are governed by rigid scientific equations. The equations for “data”, “information” & “knowledge” fit tightly into our current overarching Standard Model Of Physics (chiefly defined by Shannon’s Entropy Law) — as reported in our latest SKMRI Knowledge-Physics Lab field-notes on this topic:
https://www.linkedin.com/pulse/7-easy-things-you-can-complete-today-protect-your-stevenson-perez-mhfkc/?trackingId=Br39InHSEWRjcpwV35sgIg%3D%3D

Unfortunately, the original AI-system inventors in the U.S. did not know these physics equations: Our AI-systems will become rapidly safer & more-reliable & more-trustworthy, as soon as we stop playing games — and start fixing these AI-system design-flaws.

Sincerely –
Your SKMRI Knowledge-Physics Lab Colleagues

]]>
Comment on Train your LLMs to choose between RAG and internal memory automatically by Ben Dickson https://bdtechtalks.com/2024/05/06/adapt-llm/comment-page-1/#comment-38098 Fri, 17 May 2024 14:03:02 +0000 https://bdtechtalks.com/?p=21419#comment-38098 In reply to michaelmior.

Thanks!

]]>
Comment on Train your LLMs to choose between RAG and internal memory automatically by michaelmior https://bdtechtalks.com/2024/05/06/adapt-llm/comment-page-1/#comment-38097 Fri, 17 May 2024 14:01:53 +0000 https://bdtechtalks.com/?p=21419#comment-38097 The code for the model is now available: https://github.com/tLabruna/Adapt-LLM

]]>
Comment on How to turn any LLM into an embedding model by Jermeill O Ryan https://bdtechtalks.com/2024/04/22/llm2vec/comment-page-1/#comment-37971 Tue, 23 Apr 2024 23:00:43 +0000 https://bdtechtalks.com/?p=21341#comment-37971 As long as the source of the data is authentic. Different dialects of a language could be an issue, as well as the same words having different meanings: “computer”, “windows”… “bird outside the window”… “computer bird outside”. Exception handling: “Windows”, “computer” AND “outside”… “outside on the tree”…

]]>
Comment on AI in material science: the modern alchemy by Anan Niamul https://bdtechtalks.com/2024/03/28/ai-in-material-science/comment-page-1/#comment-37887 Sat, 30 Mar 2024 13:40:48 +0000 https://bdtechtalks.com/?p=21078#comment-37887 AI is having a significant impact on material development. In fields like chemoinformatics, AI platforms can design molecules with specific properties, which can be used to create new drugs or materials with desired characteristics [1].

]]>
Comment on Netflix study shows limits of cosine similarity in embedding models by Dr Nick https://bdtechtalks.com/2024/03/21/netflix-cosine-similarity-embedding-models/comment-page-1/#comment-37860 Tue, 26 Mar 2024 15:37:22 +0000 https://bdtechtalks.com/?p=21111#comment-37860 In reply to M H.

I think it’s https://arxiv.org/html/2403.05440v1

]]>
Comment on Netflix study shows limits of cosine similarity in embedding models by montyw https://bdtechtalks.com/2024/03/21/netflix-cosine-similarity-embedding-models/comment-page-1/#comment-37850 Mon, 25 Mar 2024 06:19:44 +0000 https://bdtechtalks.com/?p=21111#comment-37850 Thank you for writing this article, as we all think cosine is the best out there.
Though I have a question:
What does “applying normalization during or before training” really mean? How do you practically do that?
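One plausible reading (an assumption on my part, not necessarily the exact procedure in the paper): “before training” means L2-normalizing precomputed embeddings once, while “during training” means applying the same projection inside the forward pass so gradients flow through it. Either way, every vector lands on the unit sphere, so the model cannot encode information in vector magnitudes and the dot product equals cosine similarity:

```python
import math

def l2_normalize(vec):
    # Project a vector onto the unit sphere (leave zero vectors alone).
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else vec

a = l2_normalize([3.0, 4.0])   # -> [0.6, 0.8]
b = l2_normalize([6.0, 8.0])   # same direction, different magnitude

# After normalization, the dot product IS the cosine similarity.
dot = sum(x * y for x, y in zip(a, b))
print(round(dot, 6))  # 1.0: the scale difference between the vectors vanishes
```

In a training loop, the same normalization would be the last layer of the embedding tower, so the optimizer never gets to exploit norms as a side channel.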

]]>
Comment on Netflix study shows limits of cosine similarity in embedding models by M H https://bdtechtalks.com/2024/03/21/netflix-cosine-similarity-embedding-models/comment-page-1/#comment-37849 Sun, 24 Mar 2024 21:53:08 +0000 https://bdtechtalks.com/?p=21111#comment-37849 Hi, could you please provide a citation for the paper you are referring to?

]]>
Comment on Netflix study shows limits of cosine similarity in embedding models by Dragis https://bdtechtalks.com/2024/03/21/netflix-cosine-similarity-embedding-models/comment-page-1/#comment-37846 Sun, 24 Mar 2024 09:09:25 +0000 https://bdtechtalks.com/?p=21111#comment-37846 you said nothing

]]>
Comment on Writing in the age of generative artificial intelligence by Ben Dickson https://bdtechtalks.com/2024/02/12/writing-in-the-age-of-generative-ai/comment-page-1/#comment-37780 Fri, 23 Feb 2024 14:52:05 +0000 https://bdtechtalks.com/?p=20673#comment-37780 In reply to Joe Harford.

It is really impressive, but it doesn’t sound like Zinsser. He would have never used words like “drafting” and “researching” (he hated turning every noun into a verb) and he certainly would not have used “streamline.” I feel like the model just reflected the content of the article, albeit in a very impressive way.

]]>
Comment on Writing in the age of generative artificial intelligence by Joe Harford https://bdtechtalks.com/2024/02/12/writing-in-the-age-of-generative-ai/comment-page-1/#comment-37779 Fri, 23 Feb 2024 12:18:48 +0000 https://bdtechtalks.com/?p=20673#comment-37779 I asked ChatGPT to write, as Zinsser, his thoughts on this article and the development of LLMs. I found the results quite inspiring:

As I sit down at my trusty typewriter, the world outside my window buzzes with the hum of digital innovation. The advent of technologies like ChatGPT and LLMs represents a remarkable leap forward, a testament to human ingenuity and the relentless pursuit of progress. Yet, in the quiet of my writing space, the essence of communication remains unchanged: the profound connection between writer and reader, the transfer of thoughts, emotions, and ideas through the medium of language.

The rise of artificial intelligence in writing prompts a reevaluation of our craft. These tools, with their ability to generate text with a command, challenge us to consider what distinguishes the human touch in writing. It’s not merely the construction of sentences or the adherence to grammar rules that define our art; it’s the soul behind the words, the unique perspectives and lived experiences that each writer brings to the page.

To those navigating this new terrain, I offer this advice:

Embrace technology as a tool, not a cradle. Let LLMs assist you in drafting, researching, and editing, but remember that the core of writing—the insight, the nuance, the voice—cannot be outsourced to an algorithm. Use these tools to streamline your process, but let the heart of your work be distinctly your own.

Cultivate your voice. In an era where AI can mimic styles and generate content, the authenticity and uniqueness of your voice are paramount. Your voice is what will distinguish your writing in a sea of automated content. It’s your thoughts, your humor, your empathy that readers seek. Nurture it, refine it, and let it resonate through your work.

Focus on the human experience. AI may capture patterns and mimic styles, but it cannot live life. It cannot breathe in the autumn air, feel the sting of loss, or revel in the joy of shared laughter. Write about these experiences, these immutable facets of the human condition, for they remain the domain where only humans can tread.

Engage with your audience. The future of writing is not just in the hands of those who write for the masses but also those who connect with specific communities. Find your niche, engage with your readers, and build relationships that transcend the digital divide. In doing so, you ensure that your writing does more than inform or entertain; it connects, resonates, and endures.

Continue to learn and adapt. The landscape of writing is ever-changing, and what AI brings to our world is just another chapter in the long history of our craft. Stay curious, experiment with new tools, and always be willing to adapt your methods while holding fast to the principles that make writing an art.

In this new era, let us not lose sight of the fact that writing, at its core, is an expression of our humanity. No advancement in technology can replicate the depth of human thought, the complexity of our emotions, or the subtlety of our interactions. As we navigate the future, let our writing reflect the richness of our experiences, the clarity of our insights, and the warmth of our connections with one another. In doing so, we honor the legacy of those who have shaped our understanding of the craft, and we contribute to a world enriched by the power of the written word.

]]>
Comment on Writing in the age of generative artificial intelligence by Edward Harsen https://bdtechtalks.com/2024/02/12/writing-in-the-age-of-generative-ai/comment-page-1/#comment-37768 Tue, 20 Feb 2024 11:58:59 +0000 https://bdtechtalks.com/?p=20673#comment-37768 Who passed away in 1985? That pronoun attribution is off. 🙂

]]>
Comment on Writing in the age of generative artificial intelligence by Brian https://bdtechtalks.com/2024/02/12/writing-in-the-age-of-generative-ai/comment-page-1/#comment-37758 Sun, 18 Feb 2024 03:38:17 +0000 https://bdtechtalks.com/?p=20673#comment-37758 As a teacher, I’m dealing with it every week. Whether it’s an essay or short answer, AI is definitely awful. At its worst, it’s not even wrong*. At its best, it’s vapid, toneless, voiceless, soulless, drivel.

Its inability to have depth of understanding and use concise language makes it yawn inducing.

*I’m taking this from the description of pseudoscience, because it will provide answers that aren’t even related to what was asked of the student.

]]>
Comment on Reduce the costs of GPT-4 with prompt compression by Suat ATAN https://bdtechtalks.com/2023/12/20/llmlingua-prompt-compression/comment-page-1/#comment-37559 Thu, 25 Jan 2024 16:16:51 +0000 https://bdtechtalks.com/?p=19193#comment-37559 Thank you for this article. I tried to use LLMLingua; however, compression requires a huge amount of memory. It is useful, of course, but not testable on a local computer or Colab. Do you know of any memory-efficient approaches? Best

]]>
Comment on What Anthropic’s Sleeper Agents study means for LLM apps by Robert Wilson https://bdtechtalks.com/2024/01/17/anthropic-llm-backdoor/comment-page-1/#comment-37486 Thu, 18 Jan 2024 17:42:31 +0000 https://bdtechtalks.com/?p=19481#comment-37486 In the world of programming, backdoors were a way to correct issues and find errors. They were not intended to be malicious. However, just as with leaving a car unlocked in a big city, a program with a known backdoor could be an issue. I just did not want to leave this unchallenged: they are not all malevolent, or at least did not start that way.

]]>
Comment on What to know about open-source alternatives to GPT-4 Vision by sherpya https://bdtechtalks.com/2024/01/04/gpt-4-vision-open-source-alternatives/comment-page-1/#comment-37414 Sat, 06 Jan 2024 22:22:41 +0000 https://bdtechtalks.com/?p=19327#comment-37414 This model is not really open source; that term is very often misused when talking about AI models.

]]>
Comment on Do publicly shared Google Docs reveal your identity? by Jay https://bdtechtalks.com/2019/03/18/google-docs-link-sharing-identity-privacy/comment-page-1/#comment-37374 Tue, 26 Dec 2023 18:15:18 +0000 https://bdtechtalks.com/?p=4516#comment-37374 Thanks Dan, that was why I was reading this article. I recently shared a few patches I created for a program I frequently use, and later realized that doing so from Google Drive probably wasn’t a smart idea. There’s nothing wrong with the patches; I just value my privacy and like to retain it where I can.

]]>
Comment on Reduce the costs of GPT-4 with prompt compression by Karthik Soman https://bdtechtalks.com/2023/12/20/llmlingua-prompt-compression/comment-page-1/#comment-37365 Thu, 21 Dec 2023 18:04:53 +0000 https://bdtechtalks.com/?p=19193#comment-37365 Nice article! Here is another method called ‘KG-RAG’ that optimizes the token space of LLM using ‘prompt-aware context’ in a RAG framework.
https://github.com/BaranziniLab/KG_RAG

]]>
Comment on How to ensure your LLM RAG pipeline retrieves the right documents by lorenzodpolanco https://bdtechtalks.com/2023/12/04/rag-document-retrieval-optimization/comment-page-1/#comment-37295 Tue, 05 Dec 2023 12:56:02 +0000 https://bdtechtalks.com/?p=19074#comment-37295 This is helpful, thanks for sharing.

]]>
Comment on 4 reasons to use open-source LLMs (especially after the OpenAI drama) by KKthebeast https://bdtechtalks.com/2023/11/29/open-source-llm-vs-chatgpt/comment-page-1/#comment-37293 Tue, 05 Dec 2023 04:15:21 +0000 https://bdtechtalks.com/?p=19047#comment-37293 Your AMP site doesn’t work right in Firefox Mobile on Android 14 on the S23 Ultra until you click “Leave a comment”. Unless it’s just loading so slowly on gigabit internet that I had time to scroll top to bottom to see what was out of line. Check the URL for a mobile subdomain or AMP directory; I had to scroll back to the top and then back down to leave this message…

]]>
Comment on StreamingLLM gives language models unlimited context by Ben Dickson https://bdtechtalks.com/2023/11/27/streamingllm/comment-page-1/#comment-37280 Wed, 29 Nov 2023 21:26:10 +0000 https://bdtechtalks.com/?p=19016#comment-37280 In reply to Andy Tenland.

That is what I meant. It enables you to continue your conversation with the LLM past the context window, though as you said, it sticks to the length of the context window (e.g., 4k tokens). That’s what the article says too, if you read it carefully.

]]>
Comment on StreamingLLM gives language models unlimited context by Andy Tenland https://bdtechtalks.com/2023/11/27/streamingllm/comment-page-1/#comment-37279 Wed, 29 Nov 2023 15:12:37 +0000 https://bdtechtalks.com/?p=19016#comment-37279 In reply to Ben Dickson.

Your explanation is inaccurate. It does not change the context window in any way. If the LLM has a 4k context window, it can only respond using the context of the latest 4k tokens. StreamingLLM makes LLMs more efficient by removing the need to reset the cache, and it improves accuracy versus LLMs that aren’t resetting their cache. It doesn’t make it so that an LLM with a 4k context window can accurately respond to a 128k-token prompt. This article is spreading misinformation. Read the FAQ section here: https://github.com/mit-han-lab/streaming-llm
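For what it’s worth, the cache policy at the heart of StreamingLLM — keep the first few “attention sink” tokens plus a rolling window of the most recent ones, and evict the middle — can be sketched like this. This is a toy over token ids only (the real implementation evicts KV-cache entries inside the attention layers, and the sizes here are illustrative):

```python
def streaming_cache(tokens, n_sinks=4, window=8):
    # Keep the initial "attention sink" tokens plus a rolling window of
    # the most recent tokens; everything in between is evicted.
    if len(tokens) <= n_sinks + window:
        return list(tokens)
    return tokens[:n_sinks] + tokens[-window:]

stream = list(range(100))      # pretend ids from a long conversation
cache = streaming_cache(stream)
print(len(cache), cache[:6])   # cache never grows past n_sinks + window
```

The cache size stays constant no matter how long the stream gets, which is why generation can continue indefinitely even though the effective context never exceeds `n_sinks + window` tokens.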

]]>
Comment on StreamingLLM gives language models unlimited context by Ben Dickson https://bdtechtalks.com/2023/11/27/streamingllm/comment-page-1/#comment-37274 Wed, 29 Nov 2023 06:20:51 +0000 https://bdtechtalks.com/?p=19016#comment-37274 In reply to Jonathan Hostetler.

Hi Jonathan. StreamingLLM does not change the architecture of the model to expand the context window. What it does is shift the context window while maintaining the accuracy and the reused part of the KV cache. So basically, you can extend the conversation with the LLM into millions of tokens as if its context window was unlimited, but without making any changes to the model or retraining it. I hope this helps.

]]>
Comment on StreamingLLM gives language models unlimited context by Jonathan Hostetler https://bdtechtalks.com/2023/11/27/streamingllm/comment-page-1/#comment-37273 Wed, 29 Nov 2023 03:41:31 +0000 https://bdtechtalks.com/?p=19016#comment-37273 This seems amazing but I’m a bit confused. From what I understand you saying in this article, StreamingLLM could expand the context window of an LLM such as Llama to 4 million tokens, meaning I could hypothetically input 3 million words. However, the GitHub page explicitly says that it does not expand the context window. Am I missing something?

]]>
Comment on No-code retrieval augmented generation (RAG) with LlamaIndex and ChatGPT by PK https://bdtechtalks.com/2023/11/22/rag-chatgpt-llamaindex/comment-page-1/#comment-37272 Tue, 28 Nov 2023 13:12:56 +0000 https://bdtechtalks.com/?p=18991#comment-37272 In reply to Uohna.

When hosting your own open-source LLM.

]]>
Comment on The science of (artificial) intelligence by Sergei Nirenburg https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37264 Fri, 24 Nov 2023 14:47:09 +0000 https://bdtechtalks.com/?p=18921#comment-37264 In reply to Oleg Alexandrov.

The representations don’t arise implicitly. In fact, calling them representations in the AI tradition is fanciful. They are not interpretable. They don’t deal in concepts. And the property of flexibility is not well-defined beyond, possibly, some engineering concerns. This opinion is in the same mold as the belief that Eliza was sentient. Just a bit more sophisticated, that’s all.

There is no doubt that LLMs will be useful. But they will, without any doubt, become yesterday’s technology of tomorrow very soon. People will start viewing them as commodities, and the excitement will shift to whatever new and unknown engineering marvels will appear.

And all that is excellent news. Long live technological progress.

But this angle does not have much to do with science. Understanding human cognitive functioning will require much more than fMRI experiments, the metaphor of neural networks, the success of the algorithm to select the most probable next word in a text and the starry-eyed pronouncements that LLMs are edging closer to sentience.

To some people, this problem is of interest as such, irrespective of its complexity and the very low probability of it being solved within their lifetime.

]]>
Comment on No-code retrieval augmented generation (RAG) with LlamaIndex and ChatGPT by Uohna https://bdtechtalks.com/2023/11/22/rag-chatgpt-llamaindex/comment-page-1/#comment-37263 Fri, 24 Nov 2023 12:45:50 +0000 https://bdtechtalks.com/?p=18991#comment-37263 What is the value of RAG, now that we have custom GPTs?

]]>
Comment on No-code retrieval augmented generation (RAG) with LlamaIndex and ChatGPT by Paul Edwards https://bdtechtalks.com/2023/11/22/rag-chatgpt-llamaindex/comment-page-1/#comment-37262 Fri, 24 Nov 2023 08:35:58 +0000 https://bdtechtalks.com/?p=18991#comment-37262 Or just upload docs as txt on the openai platform and use their inbuilt pipeline.

]]>
Comment on The science of (artificial) intelligence by Oleg Alexandrov https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37247 Sat, 18 Nov 2023 21:58:14 +0000 https://bdtechtalks.com/?p=18921#comment-37247 In reply to stephen haase.

The problem is that nobody knows how to build “true” artificial intelligence. It also likely won’t happen in one shot. People do easy things before complicated things, and build upon prior work. We have figured out how to do image classification, voice recognition, art generation, and seamless language generation. Next, we will have better imitators of human thinking, better software assistants, and better robots. Then we can afford to work on the harder problems.

]]>
Comment on The science of (artificial) intelligence by stephen haase https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37246 Sat, 18 Nov 2023 21:43:15 +0000 https://bdtechtalks.com/?p=18921#comment-37246 I’ve been thinking more about the issue of artificial intelligence and I think that the author is really correct: AI as we know it is not really intelligence at all. AI is really more like data analysis guided by human design. The intelligence is really the human thought involved in defining the problem, defining the data to be processed, defining how the data should be processed, and defining the desired type of output. AI is more like looking at a lot of data in multiple dimensions and trying to make sense of it. It’s not much different in principle from looking at data in a histogram, or in crosstabs or pivot tables, and trying to interpret the results. Large language models are really just parroting back thought that was generated by humans on similar subjects. Ask a large language model how to design a fusion reactor that generates enough power for a city and see what results. I wager that the results will be somewhat less than satisfying. I think that we will really not see something that can be considered intelligent until we see a system that decides on its own what issues to think about, maybe starting out as a self-assembling neural network, and generating a creative approach that has not been previously considered.

]]>
Comment on The science of (artificial) intelligence by Oleg Alexandrov https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37243 Fri, 17 Nov 2023 21:38:36 +0000 https://bdtechtalks.com/?p=18921#comment-37243 In reply to Herbert L Roitblat.

Yes, sniping at snipers is always fun. 🙂

Relativity theory was a paradigm shift, yes, but it did not come in a vacuum. By then, people had thoroughly explored Newtonian mechanics and electromagnetism. Those did great but had problems in extreme cases, such as fast-moving frames of reference and the orbit of Mercury. Special relativity dealt with the first of these, and only later did general relativity handle the latter. It also needed mature mathematical machinery that developed at the same time.

The issues you raise, about versatility of the human mind and being able to do metacognition were known for decades. Unfortunately, there is no easy path forward. I think new methods will necessarily build upon existing methods, at least being motivated by failures in existing methods. So it will be incremental work either way, and there’s likely still plenty to do to understand how far current methods go.

]]>
Comment on The science of (artificial) intelligence by Herbert L Roitblat https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37242 Fri, 17 Nov 2023 17:13:58 +0000 https://bdtechtalks.com/?p=18921#comment-37242 In reply to Oleg Alexandrov.

Thanks for your comment. I think that it is fair to say that it is easier (and more amusing) to snipe than it is to create solutions, and easier still and even more amusing to snipe at the supposed snipers. But I disagree about a few things. The range of problems that can actually be solved is not a lot greater than it was. Large language models solve one problem, predicting words. People have been more clever in using that capability to address (usually with difficulty in practice) apparently different problems. Second, an incremental approach is not a sure thing. Relativity theory was not incremental relative to the physics that came before. Thomas Kuhn describes paradigm shifts that were central to scientific progress. Finally, for now, I am not just sniping. I have a full artificial general intelligence research program described in my book: https://mitpress.mit.edu/books/algorithms-are-not-enough. I don’t claim to know the answers, but I do claim to know at least some of the questions.

]]>
Comment on The science of (artificial) intelligence by CFB https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37241 Fri, 17 Nov 2023 16:37:36 +0000 https://bdtechtalks.com/?p=18921#comment-37241 Great article. Spot on. AI is on its second major behavioral model (the first was symbol processing). Until it has a fundamental theory to guide it, echoed by the author, it will be confined to the realm of so-called narrow AI, which is a human-assisted realm.

]]>
Comment on The science of (artificial) intelligence by stephen haase https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37237 Wed, 15 Nov 2023 23:25:30 +0000 https://bdtechtalks.com/?p=18921#comment-37237 Great article. I think it explains why there are so many approaches being proposed to artificially solve many problems. There is no general intelligence yet to generate the optimal solution. I keep wondering how people decide on the number of levels to use in the structure of an artificial intelligence algorithm. The general AI problem has not yet been solved.

]]>
Comment on The science of (artificial) intelligence by Oleg Alexandrov https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37236 Wed, 15 Nov 2023 17:11:46 +0000 https://bdtechtalks.com/?p=18921#comment-37236 I do agree that “Representation of data and the problem solving approach would be, I think, critical to a theory of intelligence. “. The problem is that people doing cognitive research have been doing this for decades, and the only thing they accomplished was narrow-purpose rigid models that can’t grow or express things beyond their hand-crafted representation.

That is why recently people have moved towards very large neural nets, where representations arise implicitly. Such representations are shallow, and do not separate the data from the concepts, but are more flexible.

I think learning how to build representations reliably is very hard. A solution where we guide neural nets towards building them based on data is likely going to work better than us doing that.

]]>
Comment on The science of (artificial) intelligence by Oleg Alexandrov https://bdtechtalks.com/2023/11/15/the-science-of-artificial-intelligence/comment-page-1/#comment-37235 Wed, 15 Nov 2023 17:05:20 +0000 https://bdtechtalks.com/?p=18921#comment-37235 I always find it amusing when people state that current approaches are “doomed to fail” while those advocating their position have nothing to show for it, even as current approaches are making a lot of progress.

Yes, current approaches are very limited. But they are good at solving concrete problems, and the range of problems they can solve is getting larger.

Any time people tried to do clever things they failed. Only pragmatic focus on incremental improvements worked.

We will move beyond current algorithms, but incrementally, as we uncover more patterns in the world and as we see more systematic problems with current methods.

]]>
Comment on How to fine-tune GPT-3.5 or Llama 2 with a single instruction by Ksenia https://bdtechtalks.com/2023/11/03/gpt-llm-trainer/comment-page-1/#comment-37198 Tue, 07 Nov 2023 20:32:01 +0000 https://bdtechtalks.com/?p=17615#comment-37198 I wonder why people can’t just use Google’s offerings for fine-tuning, distillation and model training via the Vertex Model Garden, where everything is easy and secure for both Google LLMs and open source. Just using Colab is a joke… And if you are not yet using Bard, it may be a good time to start, because nothing else connects as well to Google solutions like YouTube, Maps, Search and more.

]]>
Comment on To create living AI, replace neural networks with neural matrices by Valeriy https://bdtechtalks.com/2023/10/12/living-ai-neural-matrices/comment-page-1/#comment-37067 Sat, 14 Oct 2023 07:15:24 +0000 https://bdtechtalks.com/?p=17455#comment-37067 Does the matrix model provide a mechanism for the disappearance of neural connections, or for the elimination of synapses and the creation of new connections between neurons?

]]>
Comment on How to create a private ChatGPT that interacts with your local documents by Michal https://bdtechtalks.com/2023/06/01/create-privategpt-local-llm/comment-page-1/#comment-37023 Wed, 11 Oct 2023 14:43:51 +0000 https://bdtechtalks.com/?p=16540#comment-37023 Hey,
Do any of those models handle inputs in various languages (other than English)? I wonder if the model will be able to understand knowledge documents in, e.g., Polish, Spanish and English and compile them into an answer in English.

]]>