<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>ai &#8212; Minimalist EdTech</title>
    <link>https://minimalistedtech.org/tag:ai</link>
    <description>Less is more in technology and in education</description>
    <pubDate>Sat, 02 May 2026 02:19:18 +0000</pubDate>
    <image>
      <url>https://i.snap.as/qrAhYX2v.jpg</url>
      <title>ai &#8212; Minimalist EdTech</title>
      <link>https://minimalistedtech.org/tag:ai</link>
    </image>
    <item>
      <title>Mistaken Oracles in the Future of AI</title>
      <link>https://minimalistedtech.org/mistaken-oracles-in-the-future-of-ai?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;&#xA;It&#39;s popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely &#34;oracular AI&#34;. Tools like ChatGPT are one manifestation of this, a form of question and answer system that can return answers that will soon seem superhuman in terms of breadth of content and flexibility of style. I suspect most educators don&#39;t think about this framework of AI as oracle much, but we should, because it both explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.&#xA;&#xA;From the LessWrong site linked earlier, here&#39;s how they describe oracular AI (for their overall perspective, definitely take in the full set of ideas there):&#xA;  An Oracle AI is a regularly proposed solution to the problem of developing Friendly AI. It is conceptualized as a super-intelligent system which is designed for only answering questions, and has no ability to act in the world. The name was first suggested by Nick Bostrom.&#xA;&#xA;Oracular here is a de-historicized ideal of the surface function of an oracle, made into an engineering system where the oracle just answers questions based on superhuman sources or means but &#34;has no ability to act in the world.&#34; The contrast is with our Skynet future (choose your own AI gone wild movie example), where AI has a will and once connected to the means will most certainly wipe out all of humanity, whether for its own ends or as the only logical way to complete its preprogrammed (and originally innocuous, in most clichés) goals.&#xA;&#xA;Two things to note here:&#xA;This is an incredibly narrow view of what makes AI ethical, focusing especially on the output, with little attention to the path to get there. I note in passing that much criticism of current AI is aimed less at the outputs and more at the modes of exploitation of human capital and labor that go into producing said outputs.&#xA;This is a completely backwards view of oracles.&#xA;&#xA;The second point matters to me more, primarily because it&#39;s a recurring pattern in technological discussions. The term &#34;oracle&#34; has here been reduced to a transactional function in a way that flattens its meaning to the point that it evokes the opposite of the historical reality. It&#39;s not just marketing pablum, but a selective memory with significant consequences, a metaphor to frame the future. Metaphors like this construct an imaginary world from the scaffolding of the original domain. When we impoverish or selectively depict that original domain, when we distort it, we delude ourselves. It is not just a pedantic mistake but a flaw of thinking that makes more acceptable a view that we should treat with a bit more circumspection. What&#39;s more, the cues to suspicion are right there in front of us. The fullness of the idea matters, because we can see that the view of oracular AI as a friendly AI is a gross distortion, almost comically ignoring the wisdom that could be gained by considering the complex reality that is (and was) oracular practice.&#xA;&#xA;(Since the term &#34;oracle&#34; generally looks back to ancient practices, for those who want some scholarly grounding, check out Sarah Iles Johnston, Ancient Greek Divination; Michael Flower, The Seer in Ancient Greece; Nissinen&#39;s Ancient Prophecy; etc., or, for other eras and those with electronic resource access, e.g. the Oxford Bibliographies entry on prophecy in the Renaissance.)&#xA;&#xA;Long story made very short, oracles are not friendly question and answer machines. They are, in all periods and cultures, highly biased players in religio-political gamesmanship. In the case of perhaps the most famous, the Pythian oracle in Ancient Greece, the answers were notoriously difficult to interpret correctly (though the evidence for literary representations of riddling vs. actual delivery of riddling messages is more complicated). Predicting the future is a tricky business, and oracular institutions and individuals were by no means disinterested players. They looked after themselves and their own interests. They often maintained a veneer of neutrality in order to prosper.&#xA;&#xA;That is all to say that oracularism is in fact a great metaphor for current and near-future AI, but only if we historicize the term fully. I expect current AI to work very much like oracles, in all their messiness. They will be biased, subtly so in some cases. Their answers will come from unclear methods, trusted and yet suspect at the same time. And they will depend above all on humans to make meaning from nonsense.&#xA;&#xA;This last point, that the answers spouted by oracles might be as nonsensical as they are sensical, is vital. We lose track amidst the current noise around whether generative AI produces things that are correct or incorrect, copied or original, creative or stochastic boilerplate. The more important point is that humans will fill in the gaps and make sense of whatever they are given. We are the ones turning nonsense into sense, seeing meaning in a string of token probabilities, wanting to take as true something that might be a grand edifice of bullshittery. That hasn&#39;t changed since the answer-givers were Pythian priestesses.&#xA;&#xA;Oracular AI is a great metaphor. But it doesn&#39;t say what its proponents think it says. We humans are the ones who get to decide whether it is meaningful or meaningless.&#xA;&#xA;#chatgpt #ai #edtech #aiineducation #education]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/5lUNSFVp.jpg" alt=""/></p>

<p>It&#39;s popular among AI folks to think in terms of phases of AI, of which the current and most reachable target is likely <a href="https://www.lesswrong.com/tag/oracle-ai">“oracular AI”</a>. Tools like ChatGPT are one manifestation of this, a form of question and answer system that can return answers that will soon seem superhuman in terms of breadth of content and flexibility of style. I suspect most educators don&#39;t think about this framework of AI as oracle much, but we should, because it both explains a lot about the current hype cycle around large language models and can help us gain critical footing on where to go next.</p>



<p>From the LessWrong site linked earlier, here&#39;s how they describe oracular AI (for their overall perspective, definitely take in the full set of ideas there):</p>
<blockquote><p>An <strong>Oracle AI</strong> is a regularly proposed solution to the problem of developing <a href="https://wiki.lesswrong.com/wiki/Friendly_AI">Friendly AI</a>. It is conceptualized as a super-intelligent system which is designed for only answering questions, and has no ability to act in the world. The name was first suggested by <a href="https://www.lesswrong.com/tag/nick-bostrom">Nick Bostrom</a>.</p></blockquote>

<p>Oracular here is a de-historicized ideal of the surface function of an oracle, made into an engineering system where the oracle just answers questions based on superhuman sources or means but “has no ability to act in the world.” The contrast is with our Skynet future (choose your own AI gone wild movie example), where AI has a will and once connected to the means will most certainly wipe out all of humanity, whether for its own ends or as the only logical way to complete its preprogrammed (and originally innocuous, in most clichés) goals.</p>

<p>Two things to note here:</p>
<ol><li>This is an incredibly narrow view of what makes AI ethical, focusing especially on the output, with little attention to the path to get there. I note in passing that much criticism of current AI is aimed less at the outputs and more at the modes of exploitation of human capital and labor that go into producing said outputs.</li>
<li>This is a completely backwards view of oracles.</li></ol>

<p>The second point matters to me more, primarily because it&#39;s a recurring pattern in technological discussions. The term “oracle” has here been reduced to a transactional function in a way that flattens its meaning to the point that it evokes the opposite of the historical reality. It&#39;s not just marketing pablum, but a selective memory with significant consequences, a metaphor to frame the future. Metaphors like this construct an imaginary world from the scaffolding of the original domain. When we impoverish or selectively depict that original domain, when we distort it, we delude ourselves. It is not just a pedantic mistake but a flaw of thinking that makes more acceptable a view that we should treat with a bit more circumspection. What&#39;s more, the cues to suspicion are right there in front of us. The fullness of the idea matters, because we can see that the view of oracular AI as a friendly AI is a gross distortion, almost comically ignoring the wisdom that could be gained by considering the complex reality that is (and was) oracular practice.</p>

<p>(Since the term “oracle” generally looks back to ancient practices, for those who want some scholarly grounding, check out Sarah Iles Johnston, <em>Ancient Greek Divination</em>; Michael Flower, <em>The Seer in Ancient Greece</em>; Nissinen&#39;s <em>Ancient Prophecy</em>; etc., or, for other eras and those with electronic resource access, e.g. the Oxford Bibliographies entry on <a href="https://www.oxfordbibliographies.com/display/document/obo-9780195399301/obo-9780195399301-0501.xml">prophecy in the Renaissance</a>.)</p>

<p>Long story made very short, oracles are not friendly question and answer machines. They are, in all periods and cultures, highly biased players in religio-political gamesmanship. In the case of perhaps the most famous, the Pythian oracle in Ancient Greece, the answers were notoriously difficult to interpret correctly (though the evidence for literary representations of riddling vs. actual delivery of riddling messages is more complicated). Predicting the future is a tricky business, and oracular institutions and individuals were by no means disinterested players. They looked after themselves and their own interests. They often maintained a veneer of neutrality in order to prosper.</p>

<p>That is all to say that oracularism is in fact a <em>great</em> metaphor for current and near-future AI, but only if we historicize the term fully. I expect current AI to work very much like oracles, in all their messiness. They will be biased, subtly so in some cases. Their answers will come from unclear methods, trusted and yet suspect at the same time. And they will depend above all on humans to make meaning from nonsense.</p>

<p>This last point, that the answers spouted by oracles might be as nonsensical as they are sensical, is vital. We lose track amidst the current noise around whether generative AI produces things that are correct or incorrect, copied or original, creative or stochastic boilerplate. The more important point is that humans will fill in the gaps and make sense of whatever they are given. We are the ones turning nonsense into sense, seeing meaning in a string of token probabilities, wanting to take as true something that might be a grand edifice of bullshittery. That hasn&#39;t changed since the answer-givers were Pythian priestesses.</p>

<p>Oracular AI is a great metaphor. But it doesn&#39;t say what its proponents think it says. We humans are the ones who get to decide whether it is meaningful or meaningless.</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:ai" class="hashtag"><span>#</span><span class="p-category">ai</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:aiineducation" class="hashtag"><span>#</span><span class="p-category">aiineducation</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/mistaken-oracles-in-the-future-of-ai</guid>
      <pubDate>Wed, 18 Jan 2023 17:42:43 +0000</pubDate>
    </item>
    <item>
      <title>Humans in the Loop and Agency</title>
      <link>https://minimalistedtech.org/humans-in-the-loop-and-agency?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[human in the loop, made with DALL-E&#xA;&#xA;Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative &#34;AI&#34; and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations of a human-in-the-loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)&#xA;&#xA;A key question for any human-in-the-loop system is that of agency. Who is the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I&#39;m not sure that&#39;s the case. And I&#39;m not sure it&#39;s always easy to tell the difference.&#xA;&#xA;One obvious reason this is not the case with ChatGPT specifically is that OpenAI&#39;s interest in making ChatGPT available is very different from how the public perceives and adopts it. To the public, it&#39;s a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, also get a sense for potential markets. It is no different from any other big tech service insofar as the human in the loop is a commodity more than an agent. We are perhaps complacent with this relationship to our technology, that our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important in thinking through educational value. ChatGPT is a slick implementation of developing language models and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI.&#xA;(For a harsher critique of the Effective Altruism ideology that may be part of OpenAI&#39;s corporate DNA, see https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/)&#xA;&#xA;Set that all aside for a moment. If we take the core human-in-the-loop interaction of prompting the language model and receiving a probabilistic path through the high-dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long it will be before the machine dictates our behavior. As is the case with email or text or phones, how long before we have changed our way of thinking in order to think in terms of prompts?&#xA;&#xA;For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. &#34;You are a psychologist and the following is a conversation with a patient&#34;) and then feed in a fair amount of content followed by a question. (I gave a more elaborate example of this scenario setting earlier here.) That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect working with these tools over time will nudge people into patterns of expression that remain natural language but are subtly stylized in ways that work most effectively for querying machines. As was the case for search, the habit of looking things up reinforces particular ways of thinking through keywords -- of assuming that everything is keywordable -- and ways of asking questions.&#xA;&#xA;Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to stymie most people&#39;s existing level of data literacy, generative AI is a black box for the majority of users. So teachers worry: how do we control this? who controls this? how do we know what is happening? That is perhaps no different from most high-tech devices or software. For education, however, the stakes are different.&#xA;&#xA;Learning requires that students gain a sense of agency in the world. Effective learning builds on growing agency, the ability to exercise one&#39;s will and see the results. That is, in one sense, the journey of education, gradually gaining some purchase on ideas, language, concepts, tools, and one&#39;s environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there&#39;s more to it. We often talk about that as freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?&#xA;&#xA;Hence my concern. Human-in-the-loop systems can provide a false sense of agency. Perhaps most prominently, systems like Mechanical Turk are production-level human-in-the-loop systems that can reduce interaction to the hand motions of agency without substantive choice or will. But those particular kinds of tools aren&#39;t meant for human learning. They are purely transactional, labor for pay. AI-driven education on the other hand, labeled with seemingly human-centric terms like &#34;personalized learning&#34;, will be human-in-the-loop systems. The pressing question is not going to be whether these systems actually deliver personalized learning; the most important question will be how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?&#xA;&#xA;#chatgpt #education #teaching #ai #edtech&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/ByUkC3Nt.png" alt="human in the loop, made with DALL-E"/></p>

<p>Any new technology or tool, no matter how shiny its newness, can help students experiment with how technology mediates thought. I suspect that&#39;s the least problematic use of generative “AI” and large language models in the short term. One reason I think of this kind of activity as play or experimentation is that if you go much further with it, make it a habit, or take it for granted, then the whole enterprise becomes much more suspect. Most consumer-facing applications showing off large language models right now are variations of a human-in-the-loop system. (ChatGPT exposes a particularly frictionless experience for interacting with the underlying language model.)</p>

<p>A key question for any human-in-the-loop system is that of agency. Who is the architect and who is the cog? For education in particular, it might seem that treating a tool like ChatGPT as a catalyst for critical inquiry puts humans back in control. But I&#39;m not sure that&#39;s the case. And I&#39;m not sure it&#39;s always easy to tell the difference.</p>



<p>One obvious reason this is not the case with ChatGPT specifically is that OpenAI&#39;s interest in making ChatGPT available is very different from how the public perceives and adopts it. To the public, it&#39;s a viral event, a display of the promise and/or peril of recent NLP revolutions. But OpenAI is fairly clear in their fine print that they are making this publicly available in order to refine the model, test for vulnerabilities, gather validated training data, and, I would imagine, also get a sense for potential markets. It is no different from any other big tech service insofar as the human in the loop is a commodity more than an agent. We are perhaps complacent with this relationship to our technology, that our ruts of use and trails of data provide value back to the companies making those tools, but it is particularly important in thinking through educational value. ChatGPT is a slick implementation of developing language models and everything people pump into it is crowdsourced panning for gold delivered into the waiting data vaults of OpenAI.
(For a harsher critique of the Effective Altruism ideology that may be part of OpenAI&#39;s corporate DNA, see <a href="https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/">https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/</a>)</p>

<p>Set that all aside for a moment. If we take the core human-in-the-loop interaction of prompting the language model and receiving a probabilistic path through the high-dimensional mix of weights, a path which looks to human eyes like coherent sentences and ideas, where exactly is the agency? We supply a beginning, from which subsequent probabilities can be calculated. Though that feels like control, I wonder how long it will be before the machine dictates our behavior. As is the case with email or text or phones, how long before we have changed our way of thinking in order to think in terms of prompts?</p>
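
<p>To make that “probabilistic path” concrete, here is a toy sketch of autoregressive generation in Python. This is not any real model&#39;s API; the vocabulary and probabilities are invented. But the loop is the essential shape: we supply a beginning, and the machine samples probable continuations.</p>

<pre><code>import random

# Invented toy "model": for each token, a probability distribution over the
# next token. A real LLM derives this distribution from billions of weights.
TOY_PROBS = {
    "the":    {"oracle": 0.6, "future": 0.4},
    "oracle": {"speaks": 0.7, "is": 0.3},
    "speaks": {"truth": 0.5, "riddles": 0.5},
}

def generate(prompt, max_tokens=5):
    """We supply a beginning; each step samples the next token from the
    distribution conditioned on what came before."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = TOY_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the oracle"))  # e.g. "the oracle speaks riddles"
</code></pre>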

<p>For example, one particularly effective method with ChatGPT in its current incarnation is to start by giving it a scenario or role (e.g. “You are a psychologist and the following is a conversation with a patient”) and then feed in a fair amount of content followed by a question. (I gave a more elaborate example of this scenario setting earlier <a href="https://minimalistedtech.com/pretending-to-teach">here</a>.) That context setting allows the model to home in on more appropriate paths, matching style and content more closely to our human expectations. I expect working with these tools over time will nudge people into patterns of expression that remain natural language but are subtly stylized in ways that work most effectively for querying machines. As was the case for search, the habit of looking things up reinforces particular ways of thinking through keywords — of assuming that everything is keywordable — and ways of asking questions.</p>
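
<p>A minimal sketch of that pattern follows; the role, transcript, and question are all invented, and this only shows the shape of the prompt, not any particular vendor&#39;s API.</p>

<pre><code>def build_scenario_prompt(role, transcript, question):
    """Assemble role + content + question: the scenario-setting shape
    described above. All inputs are plain strings."""
    return f"{role}\n\n{transcript}\n\nQuestion: {question}"

prompt = build_scenario_prompt(
    role="You are a psychologist and the following is a conversation with a patient.",
    transcript="Patient: Lately I ask the chatbot before deciding anything myself...",
    question="What pattern do you notice in how the patient describes their choices?",
)
print(prompt)  # paste into a chat interface, or send via an API of your choice
</code></pre>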

<p>Most of the conversation around generative tools has been about control more than agency. As a set of tools whose functioning is, to a certain extent, still unauditable and whose creation relies on datasets so massive as to stymie most people&#39;s existing level of data literacy, generative AI is a black box for the majority of users. So teachers worry: how do we control this? who controls this? how do we know what is happening? That is perhaps no different from most high-tech devices or software. For education, however, the stakes are different.</p>

<p><strong>Learning requires that students gain a sense of agency in the world.</strong> Effective learning builds on growing agency, the ability to exercise one&#39;s will and see the results. That is, in one sense, the journey of education, gradually gaining some purchase on ideas, language, concepts, tools, and one&#39;s environment. That growth requires a clear sense of who is in control and works best amidst intellectual and emotional security, but there&#39;s more to it. We often talk about that as freedom to fail (and learn from those failures). Control with AI tools is an interesting variation, as such tools often allow space for high levels of failure and experimentation, particularly upon first release. ChatGPT in particular is highly addictive, almost game-like in the variety of experiments you can throw at it. But with whom does the agency lie? Is feeding the machine actual agency?</p>

<p>Hence my concern. <em>Human-in-the-loop systems can provide a false sense of agency.</em> Perhaps most prominently, systems like Mechanical Turk are production-level human-in-the-loop systems that can reduce interaction to the hand motions of agency without substantive choice or will. But those particular kinds of tools aren&#39;t meant for human learning. They are purely transactional, labor for pay. AI-driven education on the other hand, labeled with seemingly human-centric terms like “personalized learning”, will be human-in-the-loop systems. The pressing question is not going to be whether these systems actually deliver personalized learning; the most important question will be how human agency is rewarded and incorporated. Will students be cogs or creators? And will it be obvious to students where they stand in the loop?</p>

<p><a href="https://minimalistedtech.org/tag:chatgpt" class="hashtag"><span>#</span><span class="p-category">chatgpt</span></a> <a href="https://minimalistedtech.org/tag:education" class="hashtag"><span>#</span><span class="p-category">education</span></a> <a href="https://minimalistedtech.org/tag:teaching" class="hashtag"><span>#</span><span class="p-category">teaching</span></a> <a href="https://minimalistedtech.org/tag:ai" class="hashtag"><span>#</span><span class="p-category">ai</span></a> <a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/humans-in-the-loop-and-agency</guid>
      <pubDate>Sun, 15 Jan 2023 06:14:05 +0000</pubDate>
    </item>
    <item>
      <title>Edtech rant of the day: AI that isn&#39;t really AI</title>
      <link>https://minimalistedtech.org/edtech-rant-of-the-day-ai-that-isnt-really-ai?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[&#xA;Not Artificial Intelligence, aka &#34;AI&#34;&#xA;&#xA;The overuse of the term &#34;AI&#34; to market technology products has been out of control for some time. Educational technologies are no different. More and more I&#39;ve been seeing &#34;AI&#34; products in edtech that are little more than slick visualizations wrapped around basic arithmetic.&#xA;&#xA;Things that are not, by themselves or by default, &#34;AI&#34;:&#xA;&#xA;simple and obvious things expressed as percentages, e.g. percentage of students participating in some activity&#xA;any graph or visualization that is not a line or bar chart&#xA;numbers&#xA;huge dashboards full of numbers&#xA;huge dashboards full of numbers with fancy labels of the form &#34;Engagement Score&#34; or &#34;[StupidTrademarkedName] Score&#34;(tm)&#xA;&#xA;Make it stop. Seriously, machine learning, deep learning, and everything that might legitimately be called &#34;AI&#34; are interesting, powerful, problematic, potentially biased, and full of possibility. All of that is worth talking about and fair game to market as some branch of artificial intelligence; but selling elementary math and week 1 of Intro to Statistics as AI is just ridiculous.&#xA;&#xA;#edtech #AI #edtechrant]]&gt;</description>
      <content:encoded><![CDATA[<p><img src="https://i.snap.as/RUpbSq6n.jpg" alt=""/>
Not Artificial Intelligence, aka “AI”</p>

<p>The overuse of the term “AI” to market technology products has been out of control for some time. Educational technologies are no different. More and more I&#39;ve been seeing “AI” products in edtech that are little more than slick visualizations wrapped around basic arithmetic.</p>

<p>Things that are not, by themselves or by default, “AI”:</p>
<ul><li>simple and obvious things expressed as percentages, e.g. percentage of students participating in some activity</li>
<li>any graph or visualization that is not a line or bar chart</li>
<li>numbers</li>
<li>huge dashboards full of numbers</li>
<li>huge dashboards full of numbers with fancy labels of the form “Engagement Score” or “[StupidTrademarkedName] Score”™ (see the sketch after this list)</li></ul>
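
<p>For the skeptical, a hypothetical reconstruction of what one of those trademarked scores often amounts to once you peel off the dashboard (names and numbers invented):</p>

<pre><code>def engagement_score(students_active, students_enrolled):
    """Week 1 of Intro to Statistics, not AI: a percentage."""
    return 100 * students_active / students_enrolled

print(f"Engagement Score(tm): {engagement_score(18, 24):.0f}%")  # 75%
</code></pre>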

<p>Make it stop. Seriously, machine learning, deep learning, and everything that might legitimately be called “AI” are interesting, powerful, problematic, potentially biased, and full of possibility. All of that is worth talking about and fair game to market as some branch of artificial intelligence; but selling elementary math and week 1 of Intro to Statistics as AI is just ridiculous.</p>

<p><a href="https://minimalistedtech.org/tag:edtech" class="hashtag"><span>#</span><span class="p-category">edtech</span></a> <a href="https://minimalistedtech.org/tag:AI" class="hashtag"><span>#</span><span class="p-category">AI</span></a> <a href="https://minimalistedtech.org/tag:edtechrant" class="hashtag"><span>#</span><span class="p-category">edtechrant</span></a></p>
]]></content:encoded>
      <guid>https://minimalistedtech.org/edtech-rant-of-the-day-ai-that-isnt-really-ai</guid>
      <pubDate>Mon, 01 Feb 2021 14:11:23 +0000</pubDate>
    </item>
  </channel>
</rss>