Some people fear that if writing no longer takes reality as its point of departure but instead draws from other texts, then AI-generated writing could be seen as just as valid as human writing, perhaps even superior, with no grounds left to claim authenticity as a distinguishing value.
Why would anyone think this in the first place? In literary studies, it is common to speak of intertextuality, and some theoreticians, including T. S. Eliot and Harold Bloom, even suggest that writing is nothing but an interaction with tradition and a recombination of bits and pieces of text.
In my review of the AZ-900 course, I complained that it was overly theoretical and that more exercises would have been beneficial for reinforcing key concepts. The more practical nature of AI-102 confirms that intuition.
A friend of mine will interview for a job next week. She asked me how I usually prepare for interviews and I told her that I typically do a lot of research – less about the position and more about the company.
I am not alone in thinking this is a good idea. Richard Bolles, in his popular What Color Is Your Parachute, has a chapter called “Fifteen Tips About Your Job Interview,” and a section of it is simply called “Do Your Homework.” What’s the homework? Here’s what Bolles says:
Systematicity refers to the apparent structured nature of cognitive abilities: if a cognizer can entertain one thought, it can typically entertain others that are structurally related.
For example, if it can infer P from P&Q, it can also infer P from more complex conjunctions like P&R&Q. Likewise, if it understands that a loves b, it can typically also understand that b loves a, reflecting a sensitivity to structural transformations.
If a cognizer understands John loves Mary, it also understands Mary loves John. Similarly, if a cognizer can perceive a red triangle on top of a blue square, it can also perceive a blue square on top of a red triangle.
I have recently been reading some of Montaigne’s essays again. I think that there is a lot in his work that is relevant to present discussions about AI. For example, he frequently warns against anthropomorphism. Could engaging with his essays and learning more about his reservations in this respect offer us an interesting perspective on our own tendency to anthropomorphize AI? I think so, and I might explore this and other topics at a later point. Here I’ll be interested in something a bit more fluffy.
In its recent insights report Six Key Dimensions for Successful AI Adoption, Implement Consulting Group introduces the concept of GPT hesitancy to explain the caution underlying adoption rates.
GPT hesitancy, Implement suggests, stems from two conflicts. One personal, the other inter- and intraorganisational. I won’t have anything to say about the latter here.
The first reflects an internal tension between the desire to benefit from the productivity and quality gains that AI offers and the fear of appearing less competent to colleagues by relying on AI, a practice sometimes perceived as a form of cheating.
DEVONthink has been my weapon of choice for years when it comes to organizing information on my computer. I recently switched to Linux, so I need an alternative. I decided to build one myself. Here I try to record my considerations and learnings.
What to Build
I decided that I wanted to create a CLI semantic search tool that can also be used in lf to rank a variety of text files by their relevance to a query, in ascending or descending order.
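To make the idea concrete, here is a minimal sketch of what such a tool could look like, not the tool I ended up building. It assumes Python and the sentence-transformers package, and the model choice, the restriction to .txt files, and the truncation of long files are all illustrative simplifications.

```python
#!/usr/bin/env python3
"""Sketch of a CLI semantic ranker: print file paths ordered by similarity to a query."""
import argparse
import pathlib
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

def main():
    parser = argparse.ArgumentParser(description="Rank text files by semantic similarity to a query.")
    parser.add_argument("query", help="natural-language search query")
    parser.add_argument("directory", nargs="?", default=".", help="directory to search")
    parser.add_argument("--ascending", action="store_true", help="least similar first")
    args = parser.parse_args()

    # Illustrative simplification: only plain .txt files, truncated to 2000 characters.
    paths = [p for p in pathlib.Path(args.directory).rglob("*.txt") if p.is_file()]
    if not paths:
        return
    docs = [p.read_text(errors="ignore")[:2000] for p in paths]

    # Embed query and documents; normalized vectors make cosine similarity a dot product.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(docs, normalize_embeddings=True)
    query_vec = model.encode([args.query], normalize_embeddings=True)[0]
    scores = doc_vecs @ query_vec

    order = np.argsort(scores)          # ascending by similarity
    if not args.ascending:
        order = order[::-1]             # most similar first by default

    for i in order:
        print(paths[i])                 # one path per line, easy for other tools to consume
```

Because the script prints one path per line, it should be straightforward to hook into lf via a custom command that reads the list, in the same spirit as the usual fzf-based pickers.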
In public discourse, intelligence is often talked about as if it were a solved problem. Claims by the big players in the space that their chatbots have Ph.D.-level intelligence or that AGI is within reach obviously only further this idea.
But what should we believe?
Do we have an agreed-upon definition that we can just hold any candidate AI up against, or that could be made the foundation of a test of its ability? Indeed, even supposing that we had and could, should we? Turing famously introduced his imitation game to circumvent the problems that defining intelligence comes with, specifically the fear that any such definition might ultimately just reflect our own conceptual limitations and anthropocentric biases about what constitutes genuine understanding.
A few days ago, a new issue of Think:Act Magazine landed in my inbox. I saw that it featured an interview with Paolo Benanti, who advised the late Pope Francis on AI, and it immediately caught my interest.
Benanti, I thought, might offer a perspective on AI that differs from the mainstream – and he does. But I think it’s worth asking whether his response is shaped by his affiliation with the Catholic Church and its philosophical underpinnings.
I recently completed the course Generative AI with Large Language Models from DeepLearning.AI on Coursera.
I took this course because I eventually want to get my hands dirty with AI and complement my theoretical understanding with practical ability.
From Theory to Practice
I felt like I got what I came for: a more functional understanding of concepts I already knew from my thesis and general reading on AI. While I don’t yet feel ready to fine-tune a model from scratch, I now have a clearer roadmap, and with a bit of tinkering, I believe I could get there.