<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xml:base="http://www.factminers.org"  xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
 <title>FactMiners.org - Game Ideas</title>
 <link>http://www.factminers.org/tags/game-ideas</link>
 <description></description>
 <language>en</language>
<item>
 <title>Is FactMiners Like Metadatagam.es or Tiltfactor&#039;s Metadatagames.org?</title>
 <link>http://www.factminers.org/content/factminers-metadatagames-or-tiltfactors-metadatagamesorg</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/metadata_games_collage.png?itok=ncpChms2&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/metadata_games_collage.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/metadata_games_collage.png?itok=ncpChms2&quot; width=&quot;480&quot; height=&quot;360&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/metamodeling&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Metamodeling&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/crowdsourcing&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Crowdsourcing&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/game-ideas&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Game Ideas&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/citizen-science&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Citizen 
Science&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;Pressed for time? The short answer is, &lt;em&gt;&quot;Yes, in all the most general and important ways, and &#039;No&#039; in the specific way that we are designing our &lt;a href=&quot;http://lodlam.net/&quot;&gt;#LODLAM&lt;/a&gt; social-gaming platform around an &lt;a href=&quot;/content/neo4j-graphgist-design-docs-line&quot;&gt;&quot;embedded metamodel subgraph&quot; design pattern&lt;/a&gt; within an Open Source stack based on the &lt;a href=&quot;http://www.neo4j.org&quot;&gt;Neo4j graph database&lt;/a&gt; and &lt;a href=&quot;http://www.Structr.org&quot;&gt;Structr&lt;/a&gt;, the Neo4j-based next-gen CMS and web services framework. This innovative design and associated platform will take FactMiners into qualitatively new LAM social gaming territory.&quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;That quickly said, let me provide some backstory and detail in the first of this multi-part post. &lt;/p&gt;
 &lt;p&gt;I recently had a &quot;tweet-versation&quot; with &lt;a href=&quot;http://www.miaridge.com/&quot;&gt;Mia Ridge&lt;/a&gt; – the Oxford-based &lt;a href=&quot;http://www.miaridge.com/my-phd-research/&quot;&gt;PhD candidate in digital humanities&lt;/a&gt; at the Open University (geospatial, crowdsourcing digitisation), Chair of &lt;a href=&quot;https://twitter.com/ukmcg&quot;&gt;@ukmcg&lt;/a&gt;, into code, UX, history and cultural heritage. I responded to her solicitation of input for her dissertation research survey (the link in her Tweet embedded here is live if you have input to share).&lt;/p&gt;
&lt;p&gt;I jumped at the chance to share our experience – especially after visiting her blog and reading her &lt;a href=&quot;http://openobjects.blogspot.com/2014/03/sharing-is-caring-keynote-enriching.html&quot;&gt;Big Ideas about the &lt;strong&gt;Participatory History Commons&lt;/strong&gt;&lt;/a&gt; as summarized in the linked post that served as a companion piece to her recent keynote at the &lt;a href=&quot;http://sharecare14.wordpress.com/&quot;&gt;Sharing Is Caring seminar&lt;/a&gt;.&lt;br /&gt;&lt;/p&gt;&lt;div class=&quot;image-right&quot;&gt;
&lt;blockquote class=&quot;twitter-tweet&quot;&gt;
&lt;p&gt;Run a crowdsourcing or participatory project in history or cultural heritage? Help me learn from your experience &lt;a href=&quot;http://t.co/sC3JjYIpTg&quot;&gt;http://t.co/sC3JjYIpTg&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;— Mia (@mia_out) &lt;a href=&quot;https://twitter.com/mia_out/statuses/471997201762512896&quot;&gt;May 29, 2014&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;//platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;&lt;blockquote class=&quot;twitter-tweet&quot; data-partner=&quot;tweetdeck&quot;&gt;
&lt;p&gt;&lt;a href=&quot;https://twitter.com/FactMiners&quot;&gt;@FactMiners&lt;/a&gt; &lt;a href=&quot;https://twitter.com/Softalk_Apple&quot;&gt;@Softalk_Apple&lt;/a&gt; by the way, have you seen &lt;a href=&quot;http://t.co/uS74wsVr5E&quot;&gt;http://t.co/uS74wsVr5E&lt;/a&gt; and &lt;a href=&quot;http://t.co/cmq3CUJG3c&quot;&gt;http://t.co/cmq3CUJG3c&lt;/a&gt;?&lt;/p&gt;
&lt;p&gt;— Mia (@mia_out) &lt;a href=&quot;https://twitter.com/mia_out/statuses/473811754204790784&quot;&gt;June 3, 2014&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;//platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;&lt;/div&gt;
&lt;p&gt;This post – to which I&#039;ll point in Tweet-reply – answers her inquiry and, hopefully, stimulates further conversations and potential collaborations...&lt;/p&gt;
&lt;p&gt;So &quot;...have we seen Metadatagam.es or Metadatagames.org?&quot;&lt;/p&gt;
&lt;p&gt;Oh absolutely! :-) We&#039;re familiar with the amazing fun work of both (Mia&#039;s own) &lt;a href=&quot;http://museumgam.es/&quot;&gt;Metadatagam.es&lt;/a&gt; and the Dartmouth-affiliated &lt;a href=&quot;http://www.tiltfactor.org/&quot;&gt;Tiltfactor&lt;/a&gt; people and projects, as exemplified by their work on &lt;a href=&quot;http://www.metadatagames.org/&quot;&gt;Metadatagames.org&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;From a Kindred Spirits perspective, all of us (and others doing similar Cultural Heritage social games) are in the same &lt;strong&gt;&quot;Serious Fun&quot; Big Tent&lt;/strong&gt;. Besides sharing many motivational values, there is a lot of overlap at the user experience (UX) and basic game design levels. That is, we&#039;re using game dynamics as a motive-force for crowdsourcing activity in response to traditional LAM (Libraries, Archives, and Museums) &lt;a href=&quot;http://en.wikipedia.org/wiki/Digital_curation&quot;&gt;digital curation&lt;/a&gt; challenges and opportunities. In this sense we all have much to learn and share with each other. &lt;/p&gt;
&lt;p&gt;Yet, and this is a good thing I think, each of these and other Cultural Heritage projects takes a slightly different road toward the Valhalla land we all see on the horizon, and which Mia has conveniently and effectively characterized as the &lt;strong&gt;Participatory History Commons&lt;/strong&gt; as shown here diagrammatically (click to enlarge this image from &lt;a href=&quot;http://openobjects.blogspot.com/2014/03/sharing-is-caring-keynote-enriching.html&quot;&gt;Mia&#039;s PHC article&lt;/a&gt;):&lt;br /&gt;&lt;/p&gt;&lt;div class=&quot;image-solo&quot;&gt;&lt;a href=&quot;/sites/default/files/images/miaridge_Building_a_participatory_commons.png&quot;&gt;&lt;img src=&quot;/sites/default/files/images/miaridge_Building_a_participatory_commons.png&quot; width=&quot;800&quot; height=&quot;380&quot; alt=&quot;miaridge_Building_a_participatory_commons.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;This diagram will be extremely valuable in helping us talk with each other about where and how we see similarities and differences between our projects&#039; goals and methods.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 04 Jun 2014 16:43:20 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">25 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/factminers-metadatagames-or-tiltfactors-metadatagamesorg#comments</comments>
</item>
<item>
 <title>Introducing the &#039;Seeing Eye Child&#039; Robot Adoption Agency</title>
 <link>http://www.factminers.org/content/introducing-seeing-eye-child-robot-adoption-agency</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/auto-plant-robots.png?itok=BJ83LWSP&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/auto-plant-robots.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/auto-plant-robots.png?itok=BJ83LWSP&quot; width=&quot;480&quot; height=&quot;354&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/game-ideas&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Game Ideas&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/robots&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Robots!&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; 
property=&quot;content:encoded&quot;&gt;&lt;p&gt;In the &lt;a href=&quot;/content/factminers-fact-cloud-british-library-image-collection&quot;&gt;first part of this informal proposal to creatively tap the newly-published British Library Image Collection&lt;/a&gt;, I imagined a plug-in game to be developed as part of the &lt;a href=&quot;/content/factminers-more-or-less-folksonomy&quot;&gt;&lt;strong&gt;FactMiners&lt;/strong&gt; social-game ecosystem&lt;/a&gt;. In this adult/child-interactive early-learning app, gameplayers collectively contribute to building a &lt;strong&gt;Fact Cloud&lt;/strong&gt; of &lt;em&gt;&quot;What&#039;s in this picture&quot;&lt;/em&gt; facts (elementary sentence-like assertions stored in a graph database) for the over one million images recently uploaded to the &lt;a href=&quot;http://www.flickr.com/photos/britishlibrary&quot;&gt;Flickr Commons&lt;/a&gt;. &lt;strong&gt;Parents playing this new word/picture FactMiners plug-in game with their kids create the Fact Cloud &lt;/strong&gt;that becomes a vital resource for &lt;em&gt;a second new FactMiners game&lt;/em&gt;: the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt; is similar to the &lt;a href=&quot;http://en.wikipedia.org/wiki/Tamagotchi&quot;&gt;&#039;Tamagotchi&#039; or &#039;digital pet&#039;&lt;/a&gt; gaming phenomenon that hit in the mid-1990s and is still going strong. The difference here is that we harness our little cognitive learning machines – AKA FactMiners &lt;strong&gt;game players – to &#039;adopt&#039; a robot &lt;/strong&gt;(AKA a machine-learning program with some form of vision – image intake and analysis – capability) and help it learn to see and understand its world. As an adoptive &#039;Seeing Eye Child&#039;, players take on the &lt;em&gt;roles&lt;/em&gt; of &lt;strong&gt;coach&lt;/strong&gt; and &lt;strong&gt;referee&lt;/strong&gt; for training sessions where adopted robots learn to see what&#039;s in the British Library images. &lt;/p&gt;
&lt;p&gt;The nimble-thinking among you have likely spotted the weak link in this proposed game design... &lt;strong&gt;robot supply&lt;/strong&gt;. &lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FUTURAMA-Season-6B-Benderama_SECRAAremix.png&quot; width=&quot;520&quot; height=&quot;300&quot; alt=&quot;FUTURAMA-Season-6B-Benderama_SECRAAremix.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Are we honestly to believe that the latest industrial robots ready to be brought on-line to a Ford or Toyota auto assembly line are in need of vision-training sessions with young kids mentoring their ability to recognize scenes from 17th-19th century book illustrations? Well, that is most certainly not the case. &lt;/p&gt;
&lt;p&gt;So how can the Fact Cloud creators – those playing the word/picture FactMiners game that creates the Fact Cloud descriptive companion to the British Library Image Collection – how can these players be motivated to create that Fact Cloud if its imagined great use in robot vision training turns out not to be a need at all? What kid is going to wait around a Robot Adoption Agency match-making server&#039;s &#039;waiting room&#039; for an adoptable robot that may never show up?&lt;/p&gt;
&lt;p&gt;Fortunately we can consider both the means and the ends of the Fact Cloud creation effort to answer such important questions. From a &#039;means value&#039; perspective, the image-describing FactMiners gameplay that creates the Fact Cloud is a fun, social, interactive learning activity. &lt;strong&gt;There is an immediate and personal motivation and value for parents, siblings, tutors, and teachers to help little learners build the British Library Image Collection Fact Cloud.&lt;/strong&gt; So even if the robot vision training need were to turn out to be an elusive future-imagining, the &#039;serious fun&#039; of building the British Library Image Collection Fact Cloud is time and energy well-spent in direct, interactive childhood and developmental educational activity.&lt;/p&gt;
&lt;p&gt;Having &#039;means-tested&#039; the effort to create the British Library Image Collection Fact Cloud, in my next post I will turn our attention to the &#039;ends&#039; test – &lt;a href=&quot;/content/finding-cv-stem-british-library-image-collection&quot;&gt;Will we really have a robot supply problem at the &#039;Seeing Eye Child&#039; Robot Adoption Agency?&lt;/a&gt;...&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 25 Dec 2013 22:03:17 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">8 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/introducing-seeing-eye-child-robot-adoption-agency#comments</comments>
</item>
<item>
 <title>A FactMiners&#039; Fact Cloud for the British Library Image Collection</title>
 <link>http://www.factminers.org/content/factminers-fact-cloud-british-library-image-collection</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/BritishLibrary_Flickr_images.png?itok=SoxfQp9_&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/BritishLibrary_Flickr_images.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/BritishLibrary_Flickr_images.png?itok=SoxfQp9_&quot; width=&quot;292&quot; height=&quot;233&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/openculture&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;OpenCulture&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/game-ideas&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Game Ideas&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; 
property=&quot;content:encoded&quot;&gt;&lt;p&gt;I was thrilled to read the announcement this week in the &lt;a href=&quot;http://britishlibrary.typepad.co.uk/digital-scholarship/index.html&quot;&gt;British Library Digital Scholarship blog&lt;/a&gt; about the &lt;a href=&quot;http://www.flickr.com/photos/britishlibrary&quot;&gt;Library&#039;s uploading to the Flickr Commons of over 1 million Public Domain images&lt;/a&gt; scanned from 17th, 18th, and 19th century books in the Library&#039;s physical collections. The Flickr image collection makes the individual images easily available for public use. Currently the metadata about each image includes the most basic source information but nothing about the image itself. In the words of project tech lead Ben O&#039;Steen:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;We may know which book, volume and page an image was drawn from, but we know nothing about a given image. Consider the image below. The title of the work may suggest the thematic subject matter of any illustrations in the book, but it doesn&#039;t suggest how colourful and arresting these images are.&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;a href=&quot;http://www.flickr.com/photos/britishlibrary/11075039705/&quot;&gt;&lt;img src=&quot;http://britishlibrary.typepad.co.uk/.a/6a00d8341c464853ef019b029b054d970b-800wi&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;	&lt;a href=&quot;http://www.flickr.com/photos/britishlibrary/tags/imagesfrombook001012871/&quot;&gt;See more from this book&lt;/a&gt;: &quot;Historia de las Indias de Nueva-España y islas de Tierra Firme...&quot; (1867)&lt;/p&gt;
&lt;p&gt;	We plan to launch a crowdsourcing application at the beginning of next year, to help describe what the images portray. Our intention is to use this data to train automated classifiers that will run against the whole of the content. The data from this will be as openly licensed as is sensible (given the nature of crowdsourcing) and the code, as always, will be under an open license.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Ben went on to explain, &quot;Which brings me to the point of this release. &lt;strong&gt;We are looking for new, inventive ways to navigate, find and display these &#039;unseen illustrations&#039;&lt;/strong&gt;.&quot;&lt;/p&gt;
&lt;p&gt;Well, Ben&#039;s challenge got me thinking... &lt;strong&gt;What would be the value of creating a FactMiners&#039; Fact Cloud Companion to the British Library Public Domain Image Collection?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;And that&#039;s when I had my latest &quot;Eureka Moment&quot; about why the &lt;a href=&quot;/content/factminers-more-or-less-folksonomy&quot;&gt;FactMiners social-game ecosystem&lt;/a&gt; is such a compelling idea (at least to me and a few others at this point :-) ). First, let me briefly describe what a Fact Cloud Companion would look like for the British Library Image Collection before exploring why this is such an exciting and potentially important idea.&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: What?&lt;/h2&gt;
&lt;p&gt;When Ben laments that the Library&#039;s image collection does not know anything about the content of the individual images, I believe he &#039;undersold&#039; that statement by alluding to the metadata not informing us how colorful or arresting these images are. But there is a much more significant truth underlying his statement.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Images are incredible &quot;compressed storage&quot; of all the &quot;facts&quot; (verbal assertions) that we instantly understand when we humans look at an image.&lt;/strong&gt; The image Ben referenced above of the man in ceremonial South American tribal regalia is chock-full of &quot;facts&quot; like:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;The man is wearing a mask.&lt;/li&gt;
&lt;li&gt;The man is wearing a blue tunic.&lt;/li&gt;
&lt;li&gt;The man is holding a long, pointed, wavy stick.&lt;/li&gt;
&lt;li&gt;The man has a feathered shield in his left hand.&lt;/li&gt;
&lt;li&gt;The man is standing on a fringed rug.&lt;/li&gt;
&lt;li&gt;The man has a beaded bracelet on his right arm.&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;I&#039;ve written briefly about &lt;a href=&quot;/about_graphs_and_factmining&quot;&gt;how an Open Source graph database, like Neo4j, is an ideal technology for capturing FactMiners&#039; Fact Clouds&lt;/a&gt;. So I won&#039;t belabor the point by drilling down here on these example &#039;image facts&#039; to the level of graph data insertions or related queries. Suffice to say that the means are readily available to design and capture a reasonable and useful graph database of facts/assertions about what is &quot;seen&quot; in the &quot;unseen illustrations&quot; of the British Library image collection.&lt;/p&gt;
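The "image facts" listed above can be pictured as subject/predicate/object assertions, the same shape a Neo4j Fact Cloud would hold as nodes and relationships. As a rough, hypothetical sketch only (plain Python stand-ins, not the FactMiners schema or actual Cypher):

```python
# Hypothetical sketch: the example "image facts" as subject-predicate-object
# assertions. Names here are illustrative, not the project's actual model.
facts = [
    ("man", "is_wearing", "mask"),
    ("man", "is_wearing", "blue tunic"),
    ("man", "is_holding", "long pointed wavy stick"),
    ("man", "has_in_left_hand", "feathered shield"),
    ("man", "is_standing_on", "fringed rug"),
    ("man", "has_on_right_arm", "beaded bracelet"),
]

def objects_of(subject, predicate):
    """Everything a subject relates to via a given predicate."""
    return [o for s, p, o in facts if s == subject and p == predicate]

print(objects_of("man", "is_wearing"))  # prints ['mask', 'blue tunic']
```

In a graph database each subject and object would become a node and each predicate a typed relationship, so the same question is a single graph traversal rather than a scan.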
&lt;p&gt;Rather, I want to move on quickly to the &quot;A-ha Moment&quot; I had about why creating a Fact Cloud Companion to the British Library Image Collection could be a Very Good Thing.&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: Why?&lt;/h2&gt;
&lt;p&gt;Every time we look at an image, our brains decompress it into an &quot;explosion of facts.&quot; By bringing image collections into the FactMiners&#039; &quot;serious play arena&quot; we are, in effect, capturing that &quot;human image decompression&quot; process as a sharable artifact rather than a transient individual cognitive event. In other words, &lt;strong&gt;every child goes through the learning process of &quot;seeing&quot; what&#039;s in a picture&lt;/strong&gt;. When these &quot;little learning machines&quot; do a proportion of that natural childhood learning activity by playing FactMiners at the British Library Image Collection, we get a truly interesting &#039;by-product&#039; in the Fact Cloud Companion.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/danger_will_robinson.jpg&quot; width=&quot;286&quot; height=&quot;362&quot; alt=&quot;danger_will_robinson.jpg&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Beyond the obvious use of a Fact Cloud for folksonomy-class applications supporting source collection public and researcher access, &lt;strong&gt;a FactMiners Fact Cloud Companion of the British Library Public Domain Image Collection would be an invaluable resource&lt;/strong&gt; for that &lt;em&gt;new emerging museum and archive visitor base...&lt;/em&gt; &lt;strong&gt;robots.&lt;/strong&gt; Well, not so much the fully anthropomorphized walking/talking robots, at least not so much just yet. I&#039;m thinking here more like &lt;strong&gt;machine-learning programs&lt;/strong&gt;, specifically those with any form of &#039;image vision&#039; capability – whether by crude file/data &#039;input&#039; or real-time vision sensors.&lt;/p&gt;
&lt;p&gt;Upon entering the British Library Image Collection, our robot/machine-learning-program visitors would find a rich &#039;playground&#039; in which to hone their vision capabilities. All those Fact Cloud &#039;facts&#039; about what is &#039;seen&#039; in the collection&#039;s previously &#039;unseen images&#039; would be available at machine-thinking/learning speed to answer the litany of questions – &quot;What&#039;s that?&quot;, &quot;Is that a snake?&quot;, &quot;Is that boy under the table?&quot; – questions that a machine-learning program might use to refine its vision capabilities.&lt;/p&gt;
&lt;p&gt;So while the primary intent of the project is making these images available for Open Culture sharing and use, there may be some equally valuable side effects of this project. &lt;strong&gt;The British Library Image Collection and its Fact Cloud Companion could become a &quot;go-to&quot; stop for any vision-capable robot or machine-learning program that aspires to better understand the world it sees.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: How?&lt;/h2&gt;
&lt;p&gt;As the good folks at the British Library well know, just getting a good folksonomy social-tagging resource developed for such a huge collection is itself no small task. This is why museums and archives, like the British Library and those collaborating in &lt;a href=&quot;http://www.steve.museum/&quot;&gt;the steve project&lt;/a&gt;, are turning to crowdsourcing methods to get the &#039;heavy-lifting&#039; of these tasks done. &lt;strong&gt;Crowdsourcing goes hand-in-hand with gamification&lt;/strong&gt; in this regard. If we can&#039;t pay you to help us out, at least we can make the work fun, right?&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_kid_playing_app.png&quot; width=&quot;460&quot; height=&quot;316&quot; alt=&quot;FactMiners_kid_playing_app.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Well, you don&#039;t have to think too hard to realize that if creating a folksonomy is a big chore, then creating a useful Fact Cloud representing at least a good chunk of the &#039;seen&#039; in the previously &#039;unseen illustrations&#039; of the British Library Image Collection is a Way Too Big Chore. And this might be true. But I think that there is some uniquely wonderful &#039;harness-able labor&#039; to be tapped in this regard. &lt;/p&gt;
&lt;p&gt;I know we can make &lt;strong&gt;a really fun app where parents and older folks can help kids learn by playing, building fact by fact a valuable resource at the British Library, for one&lt;/strong&gt;. A learning child is a torrent of cognitive processing. Let a stream of that raw learning energy run through the FactMiners game at the British Library Image Collection and you&#039;d have critical mass in a Fact Cloud faster than you can say, &quot;Danger, Will Robinson!&quot;&lt;/p&gt;
&lt;p&gt;And where might this lead? Well, where this all might lead Big Picture wise is beyond the scope of this post. But I can see it leading to a new, previously unimagined game to add to the mix of social games available to FactMiners players... and it&#039;s a bit of a doozy. :-)&lt;/p&gt;
&lt;p&gt;If the British Library creates a FactMiners Fact Cloud Companion to its Image Collection, and if that Fact Cloud becomes useful to robots (machine-learning programs) as a vision-learning resource, I can see where we would want to add a &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency Game&lt;/strong&gt; to the FactMiners game plug-ins. What would that game be like?&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_robot_training_kids.png&quot; width=&quot;541&quot; height=&quot;409&quot; alt=&quot;FactMiners_robot_training_kids.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Well, as good as an Image Collection Fact Cloud might be to learn from, and as smart as a machine-learning program might be as a learner, a robot&#039;s learning to see isn&#039;t likely to be a fully automated process. So we &lt;strong&gt;create a game where one or more kids &#039;adopt&#039; a robot/machine-learning program to help it learn&lt;/strong&gt;. In this case, the FactMiners player would gain experience points, badges, etc. by being available for &#039;vision training&#039; sessions with the adopted robot. The FactMiners player is, in effect, the referee and coach to the robot as it learns to see. &lt;/p&gt;
&lt;p&gt;It doesn&#039;t take much imagination to see how this could lead to schools fielding teams in &lt;strong&gt;contests to take a &#039;stock&#039; robot/machine-learning-program and train it to enter various vision recognition challenges&lt;/strong&gt;. And when I let my imagination run with these ideas, it gets very interesting real fast. But any run, even of one&#039;s imagination, starts with a first step.&lt;/p&gt;
&lt;p&gt;Will we get a chance to make a Fact Cloud Companion to the British Library Image Collection? I don&#039;t know. This week the British Library took &lt;a href=&quot;http://britishlibrary.typepad.co.uk/digital-scholarship/2013/12/a-million-first-steps.html&quot;&gt;a million first steps&lt;/a&gt; toward making their vast digital image collection available to all for free. Perhaps the first step of posting this article will lead us on a path where we will have some serious fun working with the Library to help kids who help robots learn to see and understand our world.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;--Jim Salmons--&lt;br /&gt;
Cedar Rapids, Iowa USA&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; An encouraging reply of exploratory interest from the good folks at the British Library Labs has juiced my motivation to further &lt;a href=&quot;/content/introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;explore the potential for the &#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/a&gt; as a FactMiners plug-in game.&lt;/p&gt;&lt;/blockquote&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 15 Dec 2013 21:21:20 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">9 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/factminers-fact-cloud-british-library-image-collection#comments</comments>
</item>
</channel>
</rss>