How is it useful? Dessert vs food?
“At the end of lunch nowadays everyone wants to have dessert, but this is wrong because they should have food”…
The sentence “AI vs algorithms” sounds pretty much like this.
AI is a broad family of statistical and simulation algorithms.
They don’t replace algorithms; they are algorithms that are very powerful for some cases. For other cases they are less powerful, or overkill, and they shouldn’t be used. But there is no dichotomy, as one (AI) is part of the other (algorithms).
It’s not an averaging machine though. It’s a non-linear predictive system. Averages suck for non-linear prediction.
Wow!
What about the flaming and jelly windows? That was cool stuff
For research labs, Dell workstations used to be great. Put Debian on one and you could forget about problems. I don’t know if that is still the case.
The selling point of the XPS is that it is light. Many of us just need light laptops nowadays, as almost any hardware is more than capable of any task, with the exception of gaming. But I have never gamed on laptops.
The XPS Developer Edition has been a thing for almost a decade. I bought an XPS 13 with Ubuntu in Europe. I replaced the OS as soon as it arrived though. The built-in OS was not “standard”.
I still use it almost daily. The battery is gone, but everything else works.
Light ThinkPads are not cheap either
My main issue was the wifi card. Awful. Everything else was very nice.
But 500 from the 60s is super cool!!
Mine was a comment to say that LLMs are not just fancy autocomplete. Although technically an evolution, that is a bit like saying humans are fancy worms because they evolved from worms.
Ahahah, emacs is immortal
It borrowed the concept from older editors such as Emacs. It is a modern Emacs: a single editor to do literally everything via plugins. The idea is that one only needs to learn a single editor to master everything.
It is very powerful for people who do multiple things. It’s not worth it if the whole job is simply writing Java or C#. In that case a dedicated IDE is better.
VSCode is a modern Emacs. Similar concept: a single editor to do everything via extensions. That’s the selling point. “Young people” never had the chance to work with a similar concept, which is why they find it so revolutionary (despite being a concept from the 70s).
I use it because I am forced to use a Windows laptop at work, and Emacs on Windows is a painful experience.
Common reinforcement learning methods definitely are.
Are LLMs an evolution of a Markov chain? Only as much as any method that is not a Markov chain… I would say not directly. Clearly they share concepts, as does any method for simulating stochastic processes, and LLMs are definitely more recent than Markov processes. Beyond that, anyone can decide what the inspirations were.
What I wanted to say is that we are really discussing a genuinely new method in LLMs, not just “old stuff, more data”.
This is my main point.
A Markov chain models a process as transitions between states where the transition probabilities depend only on the current state.
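To make the definition concrete, here is a minimal sketch of that idea in Python; the weather states and probabilities are made up for illustration:

```python
import random

# Toy Markov chain: the next state depends ONLY on the current state.
# These transition probabilities are hypothetical.
TRANSITIONS = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def next_state(current, rng=random.random):
    """Sample the next state from the current state's transition row."""
    r = rng()
    cumulative = 0.0
    for state, p in TRANSITIONS[current]:
        cumulative += p
        if r < cumulative:
            return state
    return state  # guard against floating-point rounding
```

Note that `next_state` takes only the current state: no history of earlier states is consulted, which is exactly the Markov property.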
An LLM is arguably less a Markov chain and more similar to a discrete Langevin dynamics, as both have a memory (the attention mechanism for LLMs, inertia for LD) and both have a noise level defined by a parameter (temperature in both cases; the name “temperature” in the LLM context is derived directly from thermodynamics).
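The role of temperature can be shown in a few lines. This is a generic temperature-scaled softmax (the standard way LLM samplers apply temperature to next-token logits), with made-up logit values:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/T before normalizing.

    Low T sharpens the distribution (near-greedy sampling);
    high T flattens it toward uniform (noisier output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With hypothetical logits `[2.0, 1.0, 0.1]`, a temperature of 0.1 puts almost all probability on the first token, while a temperature of 10 makes the three options nearly equally likely.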
As far as I remember, the original attention paper doesn’t reference Markov processes.
I am not saying one cannot explain it starting from a Markov chain. It is just that saying “we could have done it decades ago, we just didn’t have the horsepower and the data” is wrong. We didn’t have a method to simulate writing. We now have a decent one, and the horsepower to train it on a lot of data.
We do. I pay to work with it, I want it to do what I want, even if wrong. I am leading.
Same for all professionals and companies paying for these models
It’s a bit like saying a human being is a fancy worm. Technically it is true, we evolved from worms, still we are pretty special compared to worms
LLMs are not Markovian, as the next word doesn’t depend only on the previous one; it depends on the previous n words, where n is the context length. I.e. LLMs have a memory that makes the generation process non-Markovian.
You are probably thinking about reinforcement learning, which is most often modeled as a Markov decision process.
And the British used to joke about the Spanish siesta… Now they want to close everything at Spanish spring temperatures.