I hoped the “Leaving the Sinking Zazuko Ship” blog post would be the last post about Zazuko, but then they stole my code.
While working on RDF/JS specifications, I fulfilled two roles.
As the chair, my goal was that every feasible requirement submitted to the group could be implemented based solely on what is written in the specification.
As the author and maintainer of RDF-Ext, I contributed requirements to the group as well. In this blog post, I want to give you a closer look at one core idea I pushed forward, an idea that until now was spread across multiple discussions and comments in different GitHub issues.
After almost 9 years, I left Zazuko, the company I co-founded, for good at the end of June. Just as I share my open-source libraries with you, I hope you can benefit from the lesson I learned about how a dark-triad personality ruined a company.
Graph traversal with plain RDF/JS objects is no fun. Keeping track of the path walked while traversing a graph adds another layer of complexity. With Grapoi, graph traversal is fun again, and it makes code more readable.
I guess it was not only my social media stream that was flooded with GPT-4 news over the last few days. The results are impressive but not surprising to me. A combination of events that are more important from my point of view didn't get as much attention: the LLM LLaMA runs on a Raspberry Pi, and Stanford students fine-tuned the model for $100.
This year, RDF-Ext will turn ten years old. In this blog post, I will give a brief overview of recent updates, show the current plans, and explain how you can help improve RDF-Ext.
One of my projects ran into a performance problem with the available SHACL engines. I also needed to know which triples had been processed (coverage/fragments), so I implemented a new SHACL engine in JavaScript from scratch.
In my last blog post, I wrote about how to detect LLM output, with the conclusion that machine-learning approaches are not the solution. Today I want to show how the recent news “Man beats machine at Go in human victory over AI” is related to that topic.
Recently, the question of whether and how text generated by large language models (LLMs) can be distinguished from human text came up on the W3C Semantic Web mailing list. A solution that seems obvious at first glance would be to use the same technology for this purpose, and there are already some tools that do exactly that. This blog post lists some of them. But that's not a long-term solution. Let me explain why:
As your SPARQL queries grow, you may stumble over duplicated parts of the query or run into performance problems. Federated queries face additional constraints. The SPARQL Named Query proposal allows the explicit reuse of sub-queries. This blog post describes the problem in more detail, how SPARQL Named Queries can solve it, and how you can try them out today.