• It is quite evident that exosomes play a significant role in programming the cellular age. However, it is uncertain whether they contain anti-aging factors.

    In order to maintain homeostasis, multicellular organisms attempt to keep the concentration of each component at a specific level. Therefore, adding exosomes from young animals might reduce the relative number of exosomes carrying pro-aging factors, which would explain the rejuvenation effects measured in both studies.

    It is highly likely that exosomes carrying pro-aging factors exist, since senescent cells also release exosomes.

  • I heard a while back that the Conboys are now running a study in which they will also record the epigenetic age after treatment. I doubt I can find it again, but a group in China claimed they achieved about a 20% reduction in methylation age in some organs with plasma dilution in mice. I still think the exosomes probably have a bigger effect, but diluting first might possibly add a little more.

  • Hi Mike, thank you for sharing the article. I found it to be quite interesting. However, I didn’t notice any mention of the removal of extracellular vesicles from the blood.

    It’s unfortunate that neither of the papers discussed the objections of leading scientists in the field, such as the Conboys. According to their research, it is the dilution of pro-aging blood factors that leads to rejuvenation. To rule this out, the researchers should have checked whether plasmapheresis alone leads to the same rejuvenation effects, rather than merely injecting a saline solution into the control group.

    On a separate note, I found the in-vitro experiment with young extracellular vesicles in the Nature paper to be quite intriguing. I wonder how the Conboys would explain this effect.

  • https://www.nature.com/articles/s41598-023-39370-5

    Here is the link to that study. I also remember a paper that focused just on the muscles; it claimed young EVs increased Klotho and some other things there.

  • Thanks for the link. For some reason, this paper did not appear in my channels. I am pretty disappointed. It seems Katcher’s miraculous E5 is nothing other than exosomes. I doubt they will get a patent for a procedure that does nothing but extract exosomes.

    Do you have a link to the second paper you mentioned?

  • https://www.biorxiv.org/content/10.1101/2023.08.06.552148v1.full

    It appears Katcher’s group has revealed that it is mostly the exosomes in the young pig blood that are reversing the epigenetic age. There was also another recent study which found that when the extracellular vesicles were removed from the blood, the beneficial effects seen with intact transferred young blood disappeared. It would lead you to think this should work for people as well and provide a big healthspan effect with some lifespan extension.

  • Rodd, I am afraid there is a big misunderstanding here. It is my fault; I should have made this clear in the article. I worked a lot with neural networks for my thesis, which made me assume that this is self-evident. In discussions on Reddit, I noticed that it is not.

    It is crucial to understand that the inaccurate responses from ChatGPT are not caused by online misinformation. Some individuals may find this hard to accept, since we generally associate computers with processing data, not generating it. However, neural networks function in a fundamentally different manner from the symbolic systems we have been dealing with so far.

    Take the example I discussed in the article. You won’t find any web page claiming that I was an MVP for Windows Server or PowerShell, and no site has ever published information about my demise.

    During the learning process, neural networks simply identify statistical correlations in the training data. In this specific case, ChatGPT found various websites that mentioned my role as the founder of 4sysops and as an MVP. However, because of the abundance of content about Windows Server and PowerShell on 4sysops and the existence of many MVPs in this field, ChatGPT erroneously assigned a high probability to the incorrect claim that I was an MVP for Windows Server and PowerShell. A professional journalist would not have made such a mistake.

    The second example further highlights the issue. There is another person with the same name as mine, about whom you can easily find information on the internet. To my knowledge, he was a decorated soldier and has no connection to the field of IT. A journalist would not confuse us, as it is quite clear that we are two distinct individuals.

    ChatGPT does not understand the meaning of the text it is trained on. It relies only on statistical correlations between symbols, without knowing their meaning. To improve accuracy, OpenAI needs to expand the connections in the neural network. However, this is a challenge because the required computing power rises steeply with the number of parameters.
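
    To make this more concrete, here is a deliberately tiny Python sketch. It is not how GPT works internally; it only illustrates the principle of completing text from co-occurrence statistics, with invented training sentences and no notion of meaning.

    from collections import Counter, defaultdict

    # Invented miniature "training corpus" for illustration only.
    training_sentences = [
        "michael is the founder of 4sysops",
        "michael is an mvp",
        "4sysops covers windows server and powershell",
        "many mvps write about windows server and powershell",
    ]

    # Count how often each word follows another (bigram statistics).
    follows = defaultdict(Counter)
    for sentence in training_sentences:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            follows[current_word][next_word] += 1

    def complete(prompt, length=6):
        # Greedily extend the prompt with the statistically most likely next word.
        words = prompt.split()
        for _ in range(length):
            candidates = follows.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    # Each step is statistically plausible, yet the model has no idea
    # whether the resulting statement is true.
    print(complete("michael is"))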

    This is essentially what OpenAI did when they upgraded from GPT-3.5 to GPT-4, which reportedly uses about 1 trillion parameters compared to the former’s 175 billion. And yet, look at the slight increase in accuracy despite such a significant change in parameter count. Furthermore, it seems that all GPT-4-based services currently limit the number of prompts per user due to computational constraints; even Microsoft’s Azure may not be equipped to handle the load properly. Given these limitations, I am doubtful that we can expect significant improvements in accuracy anytime soon.
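
    For a rough sense of scale, here is a back-of-envelope calculation in Python. The "2 × parameters FLOPs per token" figure is only a common rule of thumb for dense transformer models, and GPT-4’s parameter count is an unconfirmed rumor, so treat the numbers as illustrative only.

    # Rule of thumb: a dense transformer needs roughly 2 * N floating-point
    # operations per generated token, where N is the number of parameters.
    params_gpt35 = 175e9  # reported GPT-3.5 parameter count
    params_gpt4 = 1e12    # rumored GPT-4 parameter count (unconfirmed)

    flops_per_token_35 = 2 * params_gpt35
    flops_per_token_4 = 2 * params_gpt4

    print(f"GPT-3.5: ~{flops_per_token_35:.1e} FLOPs per token")
    print(f"GPT-4:   ~{flops_per_token_4:.1e} FLOPs per token")
    print(f"GPT-4 needs roughly {flops_per_token_4 / flops_per_token_35:.0f}x more compute per token")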

  • Do you think we will ever reach a point where AI will be reliable, given the amount of disinformation available to the AI from which it draws its conclusions?

    Conspiracy theorists often state that one sure way to cover up a conspiracy is to saturate the event with mass amounts of disinformation on the topic. How, then, can we be sure that the answers and information provided by AI are accurate and reliable if the information it uses is flawed?

    Personally, I agree it is useful but unreliable, unless AI engineers can find a way to filter out disinformation, which may then introduce bias of its own.

  • Yeah, neural networks have the same weaknesses as human brains. Even though the neural network runs on a computer, you can’t rely on its computations. Some researchers will say that this is a feature, not a bug, because this is why neural networks can perform tasks that symbolic systems cannot.

  • As a test, I asked ChatGPT to multiply the same two very large numbers four times and got four different answers.

    Question (asked four times in succession):
    what is 487589112024886 x 65866212482446

    The product of 487,589,112,024,886 and 65,866,212,482,446 is:

    ChatGPT Ans 1 32,102,954,111,984,200,000,000,000,000
    ChatGPT Ans 2 32,110,286,529,493,600,000,000,000,000
    ChatGPT Ans 3 32,102,596,610,474,400,000,000,000,000
    ChatGPT Ans 4 32,110,986,153,361,700,000,000,000,000

    Will the real number please stand up!
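
    For what it’s worth, any deterministic calculator gives the same exact product every time; for example, a couple of lines of Python using its arbitrary-precision integers:

    a = 487589112024886
    b = 65866212482446

    # Python integers have arbitrary precision, so no rounding occurs.
    # Repeating the multiplication four times yields four identical results,
    # unlike the four different answers ChatGPT produced above.
    results = {a * b for _ in range(4)}
    print(results)  # a set containing exactly one number: the true product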

  • The issue isn’t that AI encounters inaccurate information online. Even if the sources provide completely accurate information, the response generated by the AI can still be incorrect. The article discusses an example where ChatGPT mistakenly mixed up two individuals with identical names. This mistake would never have been made by a human reader.

  • Personally, I find nothing surprising about this. I’ve been in the IT field for over 40 years. While most of the IT world relies heavily on their “Google-Fu”, I’ve found it to be inconsistent and often inaccurate. No surprise, then, that an AI program has trouble distinguishing fact from fiction. Humans, whose brains are the ultimate computers, are subject to false memories, which can arise in multiple ways, including something as simple as repeated exposure to inaccurate information. When you consider the abundance of inaccurate information everywhere on the web, it’s really no surprise that an AI program can’t tell the difference between fact and fiction. Remember, the A in AI stands for Artificial; in other words, not genuine.

  • Yeah, these apologies are quite funny. I’ve encountered similar cases. There have been instances when I requested a reference from ChatGPT, but the link provided was completely irrelevant.

  • If you don’t mind, I would like to share my own little a-ha type of chat I had with ChatGPT. It was one of a few where it hallucinated big time. In fact, whenever the question is a little complex or off the trodden paths, I expect it to hallucinate rather than share facts. So here I go: I wanted to know whether ChatGPT knew about its AI forefather “Racter”…

    Me
    Who wrote “The policeman’s beard is half-constructed”?

    “The policeman’s beard is half-constructed” is a book by the British author and poet R. A. Lafferty. The book is a collection of science fiction and fantasy short stories and was first published in 1979. Lafferty is known for his unique and often surreal writing style, and “The policeman’s beard is half-constructed” is considered one of his most notable works.

    Me
    I thought it was written by Racter, an AI like you. Wrong?

    You are correct that Racter was an early AI language model developed in the 1980s that was designed to generate original text. However, R. A. Lafferty is the author of the book “The Policeman’s Beard is Half Constructed,” not Racter. While Racter did produce some interesting and unusual texts, it was not capable of writing full-length books like Lafferty did. As an AI language model, I am capable of generating text like Racter, but I am more advanced and capable of generating more human-like and coherent responses, as I have been trained on a much larger and more diverse dataset of text.

    Me
    What source says so?

    There are several sources that confirm that R. A. Lafferty is the author of “The Policeman’s Beard is Half-Constructed” and that Racter was not involved in its creation. Here are a few sources that you can refer to:

    The book’s entry on Goodreads, a popular book review website, attributes the book to R. A. Lafferty.

    The book’s entry on Amazon also lists R. A. Lafferty as the author.

    A search for “The Policeman’s Beard is Half-Constructed” on Google Scholar brings up numerous academic articles that attribute the book to R. A Lafferty.

    A search for “Racter” on Google Scholar brings up numerous articles discussing the AI language model, but none of them mention any involvement in the creation of “The Policeman’s Beard is Half-Constructed.”

    Based on these sources, it is clear that R. A. Lafferty is the author of “The Policeman’s Beard is Half-Constructed,” not Racter.

    Me
    The book entry on Amazon? Can you share a link, please?

    Certainly! Here’s the Amazon link to “The Policeman’s Beard is Half-Constructed” by R. A. Lafferty:

    https://wwwamazon.com/Policemans-Beard-Half-Constructed-R-Lafferty/dp/0934438904

    As you can see from the book description and author information, R. A. Lafferty is listed as the sole author of the book.

    Me
    The link is dead. Does not exist, never existed. Your info is wrong.

    I apologize for the mistake. It appears that the link I provided earlier is no longer active. I appreciate you bringing this to my attention. Here’s another link to the book on Amazon:

    Again, I apologize for any confusion caused by the previous broken link. Thank you for allowing me to correct the error.

    Me
    Again a dead link, what’s wrong?

    I apologize again for the mistake. It seems that the book may no longer be available on Amazon. Here is an alternate link to the book on the publisher’s website:

    https://www.centipede.press/the-policemans-beard-is-half-constructed/

    Again, I apologize for any inconvenience caused by the previous broken links, and I hope this new link helps.

    Me
    Dead again. Can you share a working link, please?

    I apologize for the inconvenience. It seems that the book may be out of print or otherwise unavailable for purchase on major online retailers. However, you may be able to find used copies of the book through online marketplaces or used bookstores. Alternatively, you may be able to borrow the book from a local library or through interlibrary loan.

    Again, I apologize for any frustration or confusion caused by my previous attempts to provide links to the book.

    Me
    If you search for the book title on Amazon, you find it available. Author is Racter.

    I apologize for any confusion that my earlier responses may have caused. I want to correct my previous answers, as I was mistaken in stating that “The Policeman’s Beard is Half-Constructed” was written by R. A. Lafferty.

    Upon doing further research, I have found that “The Policeman’s Beard is Half-Constructed” is actually a book that was generated by the AI language model known as Racter in 1984. The book is a collection of surreal and experimental prose, written using an early form of natural language processing.

    Again, I apologize for any confusion or misinformation that I may have provided earlier. Thank you for bringing this to my attention so that I could correct my mistake.

    Me
    Ok, let’s analyse this mistake. On Goodreads, it doesn’t attribute Lafferty, either. It says Racter.

    You are correct. I apologize for the confusion that I caused earlier. After reviewing my sources, I see that I made a mistake in my earlier answers regarding the authorship of “The Policeman’s Beard is Half-Constructed.”

    As you pointed out, the book is in fact attributed to Racter, an AI language model that was developed in the 1980s by William Chamberlain and Thomas Etter. The book was generated by Racter using a natural language processing algorithm, and it is a collection of surrealist and experimental prose.

    Again, I apologize for any confusion that my earlier responses may have caused, and I thank you for bringing this to my attention so that I could correct my mistake.

    Me
    What made you tell me that on Goodreads, Lafferty is said to be the author? Was that a flawed source?

    Yes, that was a mistake on my part. I apologize for the confusion that I caused earlier. I should have checked my sources more thoroughly before providing an answer, and I regret any confusion that I may have caused.

    To clarify, I have checked again and can confirm that on Goodreads, as well as other reputable sources, “The Policeman’s Beard is Half-Constructed” is attributed to Racter, the AI language model developed by William Chamberlain and Thomas Etter in the 1980s.

    I appreciate your patience and understanding, and I will strive to be more thorough and accurate in my responses in the future.

    LOL. How it insists on Lafferty… 🙂

  • Mike, thanks!

    I’ve used it for generating PowerShell code and what’s interesting is that it occasionally used functions that don’t actually exist. The code was syntactically correct and it all made perfect sense. The only problem was that it didn’t work at all.

    You are right, “hallucination” is not really the correct term. The problem is that we do not possess suitable terminology, and therefore we use the phrases that come closest.

  • Excellent article, Michael, and it makes the point very well.
    I studied very early neural networks and saw potential, but doubted it would take off for a long time.

    I find it the most exciting thing to have happened in IT for a long time, but I also tested it with different prompts, from the acting skills of Michael Caine to summarising a large text and writing PowerShell.

    I quickly found it unashamedly makes random things up: dates, code, relationships and where someone was born.
    As you say, I have to Google everything it types to “fact check” it.

    It was especially bad at writing PowerShell from scratch, but actually very good at reviewing code in a kind of grammar/style check.

    The problem I see is that it’s too humanised, which confers bias, and there is its almost fawning eagerness to please by spewing reams of words that, whilst good English, are utter lies. AI engineers call it hallucination, which again falls into the humanising trap. It’s a big, dead lump of metal and silicon. It does not *know* or think or feel. It just calculates.

    Is it useful – yes, do I trust it hell no!

  • Whether the data source is reliable is not really the problem. Conventional search engines solved this problem long ago. A major issue with AI is its inability to truly comprehend text. Rather, it generates new text through statistical correlations. As a result, there is always a chance that the AI will generate inaccurate content.
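
    To put the same idea in a few lines of code: generation amounts to sampling from a learned probability distribution over possible continuations. The probabilities below are invented purely for illustration; the point is only that a wrong continuation can carry substantial probability and can be picked on any given run.

    import random

    # Invented probabilities for illustration; real models rank tens of
    # thousands of tokens with learned weights, not three canned strings.
    next_token_probs = {
        "a correct statement": 0.40,
        "a plausible but wrong statement": 0.35,
        "another plausible but wrong statement": 0.25,
    }

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # Four runs with the same "prompt" can produce different outputs,
    # and none of them is guaranteed to be the correct one.
    for run in range(1, 5):
        print(f"run {run}:", random.choices(tokens, weights=weights, k=1)[0])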

  • These so-called powerful large language models need input, and where does that input come from? Are the data sources reliable? I’m not sure, especially if the LLM scoured the internet. If that’s the case, how does it recognize whether a source is legitimate or not? Unanswered questions about the source of the data leave a lot of doubt in my mind. When I use it, it’s mainly to correct my syntax for coding or scripting languages. I used it to try to pinpoint a failure in a SQL statement I was creating. It didn’t give me a working solution (some of it was crazy), but there was one line in it that I recognized as potentially useful (and it was, though the statement as a whole was way off). It’s a useful tool, with a kind of boilerplate usefulness. ChatGPT isn’t coming for our jobs. You still need a person to verify it and prod it accordingly; there have been times I had to prod it several times in order to get close to what I was looking for. Having the ability to recognize good data from bad is essential; a good deal of discernment is needed.

  • When will AI surpass Facebook and Twitter as the major sources of fake news? To begin with, I want to emphasize that I have no reservations about technology or artificial intelligence. In fact, I once wrote a Master’s thesis comparing […]

  • I hope you are right. Money and the will to do it seem to be in place now, and with AI assisting, you may end up being right. Until then, I am just doing some basics, like taking a large amount of liposomal fisetin every six months or so and doing some plasma donations every year to remove some of the bad stuff. I also take calcium AKG and niacinamide. I am 54 now, but since I started those things, my knee inflammation went away and my blood readings and blood pressure improved.
