mastodon.ie is one of the many independent Mastodon servers you can use to participate in the fediverse.
Irish Mastodon - run from Ireland, we welcome all who respect the community rules and members.

#embeddings

1 post · 1 participant · 0 posts today
Svenja Guhr<p>Using <a href="https://fedihum.org/tags/doc2vec" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>doc2vec</span></a> <a href="https://fedihum.org/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> and <a href="https://fedihum.org/tags/TextSimilarity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TextSimilarity</span></a>, she analyzes how canonized works influence later literary production while highlighting texts that have been forgotten, marginalized, or overlooked.<br><a href="https://fedihum.org/tags/Canon" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Canon</span></a> vs. <a href="https://fedihum.org/tags/Counter" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Counter</span></a>-canon becomes a scale shaped by retrospective, cultural, and temporal markers. <a href="https://fedihum.org/tags/CLS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CLS</span></a></p>
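The workflow above compares texts by the similarity of their dense document vectors. A minimal sketch of that comparison step, with made-up 3-dimensional vectors standing in for real doc2vec output (a real pipeline would train something like gensim's Doc2Vec first):

```python
import math

def cosine(u, v):
    # cosine similarity: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# hypothetical document vectors; the names are illustrative only
canonical = [0.9, 0.1, 0.3]
forgotten = [0.2, 0.8, 0.5]
new_text  = [0.8, 0.2, 0.3]

# the new text sits measurably closer to the canonical work
print(cosine(new_text, canonical) > cosine(new_text, forgotten))  # True
```

Ranking every later work by its similarity to canonical versus non-canonical vectors is what turns "canon vs. counter-canon" into a continuous scale rather than a binary label.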
Mathieu Jacomy<p>Ah, my latest tool, just out of the oven! Just in time for my Summer break... It's called *Vandolie*. It's for high school students, but it may work for you as well. I will let you discover it by yourself.</p><p>👉 <a href="https://jacomyma.github.io/vandolie/en/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">jacomyma.github.io/vandolie/en/</span><span class="invisible"></span></a></p><p>It's like a mini CorTexT for teenagers, if you know that tool. But it runs entirely in the browser.</p><p>Entirely localized in Danish.</p><p>Consider it a beta version. Usable, but feel free to file GitHub issues for feedback &amp; bugs.</p><p><a href="https://mas.to/tags/CSSH" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CSSH</span></a> <a href="https://mas.to/tags/DistantReading" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DistantReading</span></a> <a href="https://mas.to/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a></p>
Christian Drumm 🇪🇺🧗🚵<p>Playing with <span class="h-card" translate="no"><a href="https://mastodon.social/@duckdb" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>duckdb</span></a></span>, <a href="https://mastodon.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> and <a href="https://mastodon.social/tags/skiplists" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>skiplists</span></a>. A good exercise to understand how <a href="https://mastodon.social/tags/RAG" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RAG</span></a> works under the hood.</p>
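Under the hood, the retrieval step of RAG is just "rank stored chunks by similarity to the query vector, then paste the winners into the LLM prompt". A toy sketch with hand-made 2-D vectors (a real setup would get embeddings from a model and might keep them in DuckDB behind a skip-list or HNSW-style index):

```python
# toy RAG retrieval: score each stored chunk against the query vector
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# hypothetical chunk embeddings (illustrative values, not model output)
chunks = {
    "ducks like water": [0.1, 0.9],
    "skip lists give O(log n) search": [0.8, 0.2],
    "embeddings map text to vectors": [0.7, 0.4],
}
query_vec = [0.9, 0.1]  # pretend embedding of "how do skip lists work?"

# the best-scoring chunk is what gets stuffed into the LLM prompt
top = max(chunks, key=lambda c: dot(chunks[c], query_vec))
print(top)  # skip lists give O(log n) search
```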
➴➴➴Æ🜔Ɲ.Ƈꭚ⍴𝔥єɼ👩🏻‍💻<p>Okay, back-of-the-napkin math:<br> - There are probably 100 million sites and 1.5 billion pages worth indexing in a <a href="https://lgbtqia.space/tags/search" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>search</span></a> engine<br> - It takes about 1 TB to <a href="https://lgbtqia.space/tags/index" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>index</span></a> 30 million pages.<br> - We only care about text on a page.</p><p>I define a page as worth indexing if:<br> - It is not a FAANG site<br> - It has at least one referrer (no DD Web)<br> - It's active</p><p>So, this means we need 40 TB of fast storage to make a good index for the internet. That's not "runs locally" sized, but it is nonprofit sized.</p><p>My size assumptions are basically as follows:<br> - <a href="https://lgbtqia.space/tags/URL" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>URL</span></a><br> - <a href="https://lgbtqia.space/tags/TFIDF" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TFIDF</span></a> information<br> - Text <a href="https://lgbtqia.space/tags/Embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embeddings</span></a><br> - Snippet</p><p>We can store an index entry in about 30 KB. So, in roughly 40 TB we can store a full internet index. That's about $500 in storage.</p><p>Access time becomes a problem. TF-IDF for the whole internet can easily fit in RAM. Even with <a href="https://lgbtqia.space/tags/quantized" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>quantized</span></a> embeddings, you can only fit 2 million per GB of RAM. 
</p><p>Assuming you had enough RAM it could be fast: TF-IDF to get 100 million candidates, <a href="https://lgbtqia.space/tags/FAISS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FAISS</span></a> to sort those, load snippets dynamically, potentially modify rank by referrers, etc.</p><p>Six 128 GB <a href="https://lgbtqia.space/tags/Framework" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Framework</span></a> <a href="https://lgbtqia.space/tags/desktops" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>desktops</span></a>, each with a 5 TB HD (plus one Raspberry Pi to sort the final candidates from the six machines), is enough to replace <a href="https://lgbtqia.space/tags/Google" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Google</span></a>. That's about $15k. </p><p>In two to three years this will be doable on a single machine for around $3k.</p><p>By the end of the decade it should run as an app on a powerful desktop.</p><p>Three years after that it can run on a <a href="https://lgbtqia.space/tags/laptop" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>laptop</span></a>.</p><p>Three years after that it can run on a <a href="https://lgbtqia.space/tags/cellphone" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cellphone</span></a>.</p><p>By 2040 it's a background process on your cellphone.</p>
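The arithmetic in the thread can be checked directly. A quick sketch using the post's own estimates (~30 KB per index entry, ~2 million quantized embeddings per GB of RAM):

```python
pages = 1_500_000_000        # pages worth indexing, per the post's estimate
entry_bytes = 30 * 1024      # ~30 KB per entry: URL, TF-IDF stats, embedding, snippet

total_tb = pages * entry_bytes / 1024**4
print(f"index size: {total_tb:.0f} TB")   # ~42 TB, in line with the ~40 TB figure

ram_gb = pages / 2_000_000   # quantized embeddings: ~2 million per GB of RAM
print(f"embedding RAM: {ram_gb:.0f} GB")  # 750 GB
```

The 750 GB of embedding RAM is also what motivates the six-machine figure: six 128 GB boxes give 768 GB, just enough to hold every vector in memory.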
Markus Eisele<p>From Strings to Semantics: Comparing Text with Java, Quarkus, and Embeddings<br>Learn how to build an AI-powered text similarity service using Quarkus, LangChain4j, and local embedding models. <br><a href="https://myfear.substack.com/p/java-quarkus-text-embeddings-similarity" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/java-qua</span><span class="invisible">rkus-text-embeddings-similarity</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/Quarkus" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Quarkus</span></a> <a href="https://mastodon.online/tags/Embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embeddings</span></a> <a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.online/tags/LangChain4j" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LangChain4j</span></a></p>
JMLR<p>'Variance-Aware Estimation of Kernel Mean Embedding', by Geoffrey Wolfer, Pierre Alquier.</p><p><a href="http://jmlr.org/papers/v26/23-0161.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v26/23-0161.ht</span><span class="invisible">ml</span></a> <br> <br><a href="https://sigmoid.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <a href="https://sigmoid.social/tags/embedding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embedding</span></a> <a href="https://sigmoid.social/tags/empirical" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>empirical</span></a></p>
FIZ ISE Research Group<p>We are very happy that our colleague <span class="h-card" translate="no"><a href="https://sigmoid.social/@GenAsefa" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>GenAsefa</span></a></span> has contributed the chapter on "Neurosymbolic Methods for Dynamic Knowledge Graphs" for the newly published Handbook on Neurosymbolic AI and Knowledge Graphs together with Mehwish Alam and Pierre-Henri Paris.</p><p>Handbook: <a href="https://ebooks.iospress.nl/doi/10.3233/FAIA400" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ebooks.iospress.nl/doi/10.3233</span><span class="invisible">/FAIA400</span></a><br>our own chapter on arXiv: <a href="https://arxiv.org/abs/2409.04572" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2409.04572</span><span class="invisible"></span></a></p><p><a href="https://sigmoid.social/tags/neurosymbolicAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neurosymbolicAI</span></a> <a href="https://sigmoid.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://sigmoid.social/tags/generativeAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>generativeAI</span></a> <a href="https://sigmoid.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://sigmoid.social/tags/knowledgegraphs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>knowledgegraphs</span></a> <a href="https://sigmoid.social/tags/semanticweb" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>semanticweb</span></a> <a href="https://sigmoid.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <a 
href="https://sigmoid.social/tags/graphembeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>graphembeddings</span></a></p>
FIZ ISE Research Group<p>Poster from our colleague <span class="h-card" translate="no"><a href="https://blog.epoz.org/" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>epoz</span></a></span> from UGent-IMEC Linked Data &amp; Solid course. "Exploding Mittens - Getting to grips with huge SKOS datasets" on semantic embeddings enhanced SPARQL queries for ICONCLASS data.<br>Congrats for the 'best poster' award ;-) </p><p>poster: <a href="https://zenodo.org/records/14887544" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">zenodo.org/records/14887544</span><span class="invisible"></span></a><br>iconclass on GitHub: <a href="https://github.com/iconclass" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/iconclass</span><span class="invisible"></span></a></p><p><a href="https://sigmoid.social/tags/rdf2vec" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>rdf2vec</span></a> <a href="https://sigmoid.social/tags/bert" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bert</span></a> <a href="https://sigmoid.social/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://sigmoid.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <a href="https://sigmoid.social/tags/iconclass" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>iconclass</span></a> <a href="https://sigmoid.social/tags/semanticweb" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>semanticweb</span></a> <a href="https://sigmoid.social/tags/lod" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>lod</span></a> <a href="https://sigmoid.social/tags/linkeddata" class="mention hashtag" rel="nofollow noopener" 
target="_blank">#<span>linkeddata</span></a> <a href="https://sigmoid.social/tags/knowledgegraphs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>knowledgegraphs</span></a> <a href="https://sigmoid.social/tags/dh" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dh</span></a> <span class="h-card" translate="no"><a href="https://nfdi.social/@nfdi4culture" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>nfdi4culture</span></a></span> <span class="h-card" translate="no"><a href="https://wisskomm.social/@fiz_karlsruhe" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>fiz_karlsruhe</span></a></span> <a href="https://sigmoid.social/tags/iconclass" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>iconclass</span></a></p>
Mark Igra<p>Is there a consensus process or good paper on state of the art on using <a href="https://sciences.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> &amp; <a href="https://sciences.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> to do the kinds of things that were being done with topic models? I imagine for tasks with pre-defined classifications, prompts are sufficient, but any recommendations for identifying latent classes? After reading the paper below I think I'll want to use local models. <a href="https://sciences.social/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://drive.google.com/file/d/1wNDIkMZfAGoh4Oaojrgll9SPg3eT-YXz/view" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">drive.google.com/file/d/1wNDIk</span><span class="invisible">MZfAGoh4Oaojrgll9SPg3eT-YXz/view</span></a></p>
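One common recipe for the latent-class question is to cluster document embeddings and read each cluster as a topic, which is roughly what topic models delivered. A toy sketch of the assignment step, with 2-D vectors and hand-picked centroids standing in for real model output (a full pipeline would learn the centroids, e.g. with k-means):

```python
# assign each document to its nearest cluster centre (squared Euclidean distance);
# documents whose embeddings land in the same cluster play the role of a latent class
def nearest(vec, centroids):
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, centroids[i])))

# hypothetical document embeddings and cluster centres, for illustration only
docs = {"budget speech": [0.9, 0.1], "tax reform": [0.8, 0.2], "match report": [0.1, 0.9]}
centroids = [[0.85, 0.15], [0.1, 0.9]]

labels = {name: nearest(vec, centroids) for name, vec in docs.items()}
print(labels)  # {'budget speech': 0, 'tax reform': 0, 'match report': 1}
```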
Andrew Wooldridge 🐲<p>Embeddings are cool <a href="https://technicalwriting.dev/data/embeddings.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">technicalwriting.dev/data/embe</span><span class="invisible">ddings.html</span></a> <a href="https://social.lol/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://social.lol/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a></p>
JMLR<p>'Recursive Estimation of Conditional Kernel Mean Embeddings', by Ambrus Tamás, Balázs Csanád Csáji.</p><p><a href="http://jmlr.org/papers/v25/23-0168.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v25/23-0168.ht</span><span class="invisible">ml</span></a> <br> <br><a href="https://sigmoid.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <a href="https://sigmoid.social/tags/supervised" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>supervised</span></a> <a href="https://sigmoid.social/tags/estimation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>estimation</span></a></p>
Ian K Tindale<p><span>My prediction of the day<br><br>There will be a resurgence of kerfuffle and bustle around Virtual Reality again<br><br>Why? I predict that there will be a direction ahead for multimodal LLMs and their agentic systems to be able to produce 3D scenarios which represent a sort of token-memory bound ‘known world’ in which the ‘perception’ of the LLM can ‘project’ things it ‘knows’ about into this imaginary world, giving eg Blender files that render the output (and even as moving pictures, kinematic sequences etc, resulting in movie or video playout)<br><br>If a multimodal LLM with Blender output can make movies, it can also make virtual reality movies, and even turn this into a live scenario rather than playout, and then it turns into an input as well as output medium<br><br>Your (and others) actions and decisions within a VR space (hosted or provided or run by the LLM system) act as contributing input and in fact can be tokenised as an embedding itself<br><br>If that’s the case, anything within the ‘real’ to ‘virtual’ interface space can be tokenised into an embedding for that scenario, so ‘meaning’ might come to be extracted, regarding position and movement and why those ducks are now all moving over there instead of staying over here and what’s the problem with being here, what’s compelling them to be there, is there a meaning, what’s the meaning, nothing happens for no reason<br><br>Anyway, AI systems that are capable of producing a virtual world which we can step into to experience the resulting output of the AI system, and at the same time we proffer input directly verbally, and behaviourally, will bring a whole load of interest in VR back again so we’ll be hearing about VR all over again<br><br></span><a href="https://toot.pikopublish.ing/tags/AI" rel="nofollow noopener" target="_blank">#AI</a><span> </span><a href="https://toot.pikopublish.ing/tags/LLM" rel="nofollow noopener" target="_blank">#LLM</a><span> </span><a 
href="https://toot.pikopublish.ing/tags/MultimodalLLM" rel="nofollow noopener" target="_blank">#MultimodalLLM</a><span> </span><a href="https://toot.pikopublish.ing/tags/AgenticAI" rel="nofollow noopener" target="_blank">#AgenticAI</a><span> </span><a href="https://toot.pikopublish.ing/tags/VR" rel="nofollow noopener" target="_blank">#VR</a><span> </span><a href="https://toot.pikopublish.ing/tags/VirtualReality" rel="nofollow noopener" target="_blank">#VirtualReality</a><span> </span><a href="https://toot.pikopublish.ing/tags/Blender" rel="nofollow noopener" target="_blank">#Blender</a><span> </span><a href="https://toot.pikopublish.ing/tags/embeddings" rel="nofollow noopener" target="_blank">#embeddings</a><span> </span><a href="https://toot.pikopublish.ing/tags/meaning" rel="nofollow noopener" target="_blank">#meaning</a><span> </span><a href="https://toot.pikopublish.ing/tags/ducks" rel="nofollow noopener" target="_blank">#ducks</a></p>
Harald Sack<p>In 2013, Mikolov et al. (from Google) published word2vec, a neural network based framework to learn distributed representations of words as dense vectors in continuous space, aka word embeddings.</p><p>T. Mikolov et al. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781<br><a href="https://arxiv.org/abs/1301.3781" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/1301.3781</span><span class="invisible"></span></a></p><p><a href="https://sigmoid.social/tags/HistoryOfAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HistoryOfAI</span></a> <a href="https://sigmoid.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://sigmoid.social/tags/ise2024" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ise2024</span></a> <a href="https://sigmoid.social/tags/lecture" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>lecture</span></a> <a href="https://sigmoid.social/tags/distributionalsemantics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>distributionalsemantics</span></a> <a href="https://sigmoid.social/tags/wordembeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>wordembeddings</span></a> <a href="https://sigmoid.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <span class="h-card" translate="no"><a href="https://fedihum.org/@sourisnumerique" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>sourisnumerique</span></a></span> <span class="h-card" translate="no"><a href="https://sigmoid.social/@enorouzi" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>enorouzi</span></a></span> <span class="h-card" translate="no"><a href="https://sigmoid.social/@fizise" class="u-url 
mention" rel="nofollow noopener" target="_blank">@<span>fizise</span></a></span></p>
Sören Auer 🇪🇺🇺🇦<p>Ask your (research) question against 76 million scientific articles: <a href="https://ask.orkg.org" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">ask.orkg.org</span><span class="invisible"></span></a></p><p><span class="h-card" translate="no"><a href="https://openbiblio.social/@orkg" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>orkg</span></a></span> ASK (Assistant for Scientific Knowledge) uses vector <a href="https://mstdn.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> to find the most relevant papers and an open-source <a href="https://mstdn.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> to synthesize the answer for you.</p>
Harald Sack<p>Interesting new survey paper on ontology embeddings:<br>Jiaoyan Chen, Olga Mashkova, Fernando Zhapa-Camacho, Robert Hoehndorf, Yuan He, Ian Horrocks, Ontology Embedding: A Survey of Methods, Applications and Resources</p><p><a href="https://arxiv.org/abs/2406.10964" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2406.10964</span><span class="invisible"></span></a></p><p><a href="https://sigmoid.social/tags/ontologies" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ontologies</span></a> <a href="https://sigmoid.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <a href="https://sigmoid.social/tags/knowledgegraphs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>knowledgegraphs</span></a></p>
:rss: Hacker News<p>Show HN: AI Agent System to Analyze ArXiv AI Papers<br><a href="https://www.stack-ai.com/form/83883819-33d3-443c-88db-18106c9226da/ba81c6e6-b8af-4a97-b37a-174502daf8c4/6661deb730cbde865feba7f7" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">stack-ai.com/form/83883819-33d</span><span class="invisible">3-443c-88db-18106c9226da/ba81c6e6-b8af-4a97-b37a-174502daf8c4/6661deb730cbde865feba7f7</span></a><br><a href="https://rss-mstdn.studiofreesia.com/tags/ycombinator" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ycombinator</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Stack_AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Stack_AI</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Enterprise_AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Enterprise_AI</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/AI_applications" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI_applications</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/RAG" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RAG</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Fine_tunning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Fine_tunning</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Embeddings</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Custom_ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Custom_ChatGPT</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/AI_Applications" class="mention hashtag" rel="nofollow 
noopener" target="_blank">#<span>AI_Applications</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Generative_AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Generative_AI</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Hospitality" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Hospitality</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Retail" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Retail</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Marketing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Marketing</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Advertising" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Advertising</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Healthcare" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Healthcare</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Real_Estate" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Real_Estate</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Finance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Finance</span></a> <a href="https://rss-mstdn.studiofreesia.com/tags/Insurance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Insurance</span></a></p>
JMLR<p>'An Embedding Framework for the Design and Analysis of Consistent Polyhedral Surrogates', by Jessie Finocchiaro, Rafael M. Frongillo, Bo Waggoner.</p><p><a href="http://jmlr.org/papers/v25/22-0743.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v25/22-0743.ht</span><span class="invisible">ml</span></a> <br> <br><a href="https://sigmoid.social/tags/surrogates" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>surrogates</span></a> <a href="https://sigmoid.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <a href="https://sigmoid.social/tags/surrogate" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>surrogate</span></a></p>
o19s<p>A substantial portion of unstructured data contains multimodal elements such as text, tables, and images. Praveen and Hajer from Amazon will explore the utilization of <a href="https://fosstodon.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> and multimodal <a href="https://fosstodon.org/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> for <a href="https://fosstodon.org/tags/RAG" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RAG</span></a> at Haystack US <a href="https://haystackconf.com/us2024/talk-14/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">haystackconf.com/us2024/talk-1</span><span class="invisible">4/</span></a></p>
Starbeamrainbowlabs<p>New blog post time! A new blog post series explaining basic AI concepts. Got something you want explained? Do let me know!</p><p>Defining AI: Word embeddings</p><p><a href="https://starbeamrainbowlabs.com/blog/article.php?article=posts%2F543-defining-ai-word-embeddings.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">starbeamrainbowlabs.com/blog/a</span><span class="invisible">rticle.php?article=posts%2F543-defining-ai-word-embeddings.html</span></a></p><p><a href="https://fediscience.org/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://fediscience.org/tags/embeddings" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>embeddings</span></a> <a href="https://fediscience.org/tags/nlp" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nlp</span></a> <a href="https://fediscience.org/tags/blogpost" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>blogpost</span></a> <a href="https://fediscience.org/tags/iamslightlyburntoutfromthesiswriting" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>iamslightlyburntoutfromthesiswriting</span></a></p>
Ferdinando Simonetti <p><span>[ A story from Shaw Talebi on Medium ]<br>Read this story from Shaw Talebi on Medium: </span><a href="https://towardsdatascience.com/text-embeddings-classification-and-semantic-search-8291746220be" rel="nofollow noopener" target="_blank">https://towardsdatascience.com/text-embeddings-classification-and-semantic-search-8291746220be</a> <a href="https://misskey.social/tags/LLM" rel="nofollow noopener" target="_blank">#LLM</a> <a href="https://misskey.social/tags/Embeddings" rel="nofollow noopener" target="_blank">#Embeddings</a></p>