Link to the python library (link in the article is broken atm): https://byzfl.epfl.ch
> ByzFL is a Python library for Byzantine-resilient Federated Learning. It is designed to be fully compatible with both PyTorch tensors and NumPy arrays, making it versatile for a wide range of machine learning workflows.
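To make "Byzantine-resilient" concrete: the core idea is replacing the naive mean of worker gradients with a robust aggregator so a minority of malicious or faulty workers can't poison the update. This is not ByzFL's actual API, just a minimal NumPy sketch of one classic robust aggregator (coordinate-wise median):

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Robust aggregation: take the median of each coordinate across
    workers, so a minority of Byzantine (malicious or faulty) workers
    cannot drag the aggregate arbitrarily far from the honest values."""
    return np.median(np.stack(gradients), axis=0)

# Three honest workers report similar gradients; one Byzantine worker
# sends a huge outlier to poison the naive mean.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([1e6, -1e6])]
grads = honest + byzantine

naive = np.mean(np.stack(grads), axis=0)    # ruined by the single outlier
robust = coordinate_wise_median(grads)      # stays near the honest gradients
```

Real libraries in this space ship stronger aggregators (trimmed mean, Krum, etc.) with proven resilience bounds, but the median already shows why per-coordinate order statistics beat averaging under attack.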
The thing with training an LLM on basically as much of the internet as you can process is that it becomes an automated "wisdom of the crowds" machine, which isn't all bad, but it isn't all-knowing either.
"Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?"
I wonder, as AI becomes more and more complex and incomprehensible, and the risk of loss of control grows, whether the solution will simply be two adversarial AIs: one that generates, the other that detects deception/misalignment. And at some point, when things have advanced beyond comprehension, we just have to trust the yin-yang balance of good vs. evil AI gods. /s
Problem is that the detector AI can be misaligned too, especially if it is at a similar capability level as the generator AI. https://www.youtube.com/watch?v=0pgEMWy70Qk
Interesting, could be relevant: there's an idea for a unicorn AI safety startup that would pool the currently almost 100% unprotected (from AI botnets) consumer GPUs into a cloud with Google-level security (each GPU can bring in $30-1500 in profit per month, which you can share with the user; the user can play GPU games from any device, use any free or paid AI model, everything genuinely gets better; you could even include a 5G modem). Here's the full proposal (the author is probably dyslexic): https://melonusk.substack.com/p/notes-on-euto-principles-and...