<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xml:base="http://www.factminers.org"  xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
 <title>FactMiners.org - AI etc.</title>
 <link>http://www.factminers.org/tags/ai-etc</link>
 <description></description>
 <language>en</language>
<item>
 <title>Inside the FactMiners&#039; Brain - Rainman Meet Sherlock</title>
 <link>http://www.factminers.org/content/inside-factminers-brain-rainman-meet-sherlock</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/factminers_brain_image.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/factminers_brain_image.png?itok=psQpUKMf&quot; width=&quot;416&quot; height=&quot;480&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot;&gt;&lt;a href=&quot;/tags/cognitive-computing&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Cognitive Computing&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot;&gt;&lt;a href=&quot;/tags/metamodeling&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Metamodeling&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot;&gt;&lt;a href=&quot;/tags/linked-open-data&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Linked Open Data&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot;&gt;&lt;a href=&quot;/tags/deep-learning&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Deep Learning&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div 
class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;NOTE: In case you missed it, here is a link to a screencast of &lt;a href=&quot;http://watch.neo4j.org/video/105266385?mkt_tok=3RkMMJWWfF9wsRogvqXPZKXonjHpfsX86%2BktUa63lMI%2F0ER3fOvrPUfGjI4HT8NjI%2BSLDwEYGJlv6SgFTbfGMadv1LgNXRQ%3D&quot;&gt;Kenny Bastani&#039;s webinar about using the Neo4j graph database in text classification and related Deep Learning applications&lt;/a&gt;. It&#039;s a fascinating introduction to some original work Kenny is doing that leverages the strengths of a property graph, in this case Neo4j, to do some Deep Learning text-mining and document classification. &lt;/p&gt;
&lt;p&gt;This article is about what we are going to try to do with Kenny&#039;s new Graphify extension for Neo4j. And a big &quot;Thank you!&quot; and kudos to Kenny for kickstarting activity around this important topic within the Neo4j community.&lt;/p&gt;
&lt;h2&gt;Some Thoughts About Thinking&lt;br /&gt;&lt;/h2&gt;
&lt;p&gt;You would be hard-pressed to go through any formal education where you did not learn about our &quot;two brains&quot; – left and right hemispheres, verbal/non-verbal, creative/literal, the conscious/subconscious, long-term/short-term, self/other, etc. All these various perspectives remind us that how we think, as humans, is a complex yin-yang cognitive process. Whatever works well to help us understand ourselves and the world around us in some cases does poorly in others, and vice versa. So we&#039;ve cleverly evolved the &quot;wetware&quot; to do both and, in one of our brains&#039; most truly amazing feats, to provide some kind of highly effective, real-time integration of these multiple perspectives.&lt;/p&gt;
&lt;p&gt;One of the most intriguing distinctions to consider when attempting to model human cognitive processes (let&#039;s settle for calling it &quot;smart software&quot; to avoid going too far into pure ResearchSpeak) is the role of subconscious versus conscious processing. Some things are so voluminous and detailed -- basic perception, for example -- that we would bore ourselves to death and slow our thinking processes to a crawl if they ran through our conscious, mostly verbal, cognitive processes. Other aspects of our thinking -- e.g. things that produce an &quot;A-ha!&quot; conscious moment of discovery -- require the &quot;hands off&quot; focus of subconscious processing. Without such cloistered incubation opportunities, our overbearing conscious mental processes can too easily derail an otherwise breakthrough thought.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/2013-02-06-LeftBrainRightBrain21.jpg&quot; width=&quot;420&quot; height=&quot;310&quot; alt=&quot;2013-02-06-LeftBrainRightBrain21.jpg&quot; /&gt;&lt;/div&gt;
&lt;p&gt;So it should not surprise us that software analogs of (something akin to) our own cognitive processing will benefit from a similar strategy of &quot;complementary opposites.&quot; We should expect to find some real design opportunities for &quot;smart software&quot; by providing a rough approximation of this subconscious/conscious distinction as we move from an application-centric development mindset to a more appropriate agent-centric design mindset. Exploring how smart software might incorporate this &quot;two-cylinder thinking engine&quot; is one of the &quot;serious fun&quot; R&amp;amp;D initiatives at FactMiners.org. &lt;/p&gt;
&lt;p&gt;We&#039;re active in the Neo4j community because FactMiners is exploring the unique, expressive nature of graph database technology to model how &quot;subconscious&quot; cognitive processing (e.g. the NLP-based stuff of &lt;a href=&quot;http://info.neo4j.com/0904-register.html?_ga=1.155401086.479957737.1409240112&quot;&gt;Kenny&#039;s text classification webinar&lt;/a&gt;) can be integrated with &quot;conscious&quot; cognitive processing (e.g. our &lt;a href=&quot;/content/neo4j-graphgist-design-docs-line&quot;&gt;metamodel-subgraph GraphGists&lt;/a&gt; that are more akin to &quot;mind maps&quot;). Our belief is that such a software design strategy can lead to a synergistic result that is greater than the sum of what these simulated cognitive processes can contribute independently. To allude to popular culture, our research asks:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;How can we get both the Rainman-like, obsessive-compulsive, bureaucratic, ruthlessly-detailed part of our subconscious processing to work in concert with the Sherlock Holmes-like, logical, deductive, constructive part of our &quot;wetware&quot;? &lt;/em&gt; &lt;/p&gt;
&lt;h2&gt;The Rainman Part - Kenny Bastani&#039;s Text Classification Blog/Webinar&lt;br /&gt;&lt;/h2&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/rainman_poster.png&quot; width=&quot;225&quot; height=&quot;337&quot; alt=&quot;rainman_poster.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Kenny Bastani&#039;s webinar this Thursday, &lt;em&gt;&lt;a href=&quot;http://info.neo4j.com/0904-register.html?_ga=1.155401086.479957737.1409240112&quot;&gt;&quot;Using Neo4j for Document Classification&quot;&lt;/a&gt;&lt;/em&gt; will provide a great live demonstration of the kind of relentless, detail-oriented, largely subconscious aspect of our human cognitive process. Kenny&#039;s recent blog post, &lt;em&gt;&lt;a href=&quot;http://www.kennybastani.com/2014/08/using-graph-database-for-deep-learning-text-classification.html&quot;&gt;&quot;Using a Graph Database for Deep Learning Text Classification,&quot;&lt;/a&gt;&lt;/em&gt; is provided as a webinar supplement and gives a good introduction (with links) to the Deep Learning ideas and methods employed in his latest Open Source project, &lt;strong&gt;Graphify&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/kbastani/graphify&quot;&gt;Graphify is a Neo4j unmanaged extension&lt;/a&gt; adding &lt;strong&gt;NLP-based (Natural Language Processing) document and text classification features&lt;/strong&gt; to the Neo4j graph database using &lt;em&gt;graph-based hierarchical pattern recognition&lt;/em&gt;. As Kenny describes in his blog post:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;em&gt;&quot;Graphify gives you a mechanism to train natural language parsing models that extract features of a text using deep learning. When training a model to recognize the meaning of a text, you can send an article of text with a provided set of labels that describe the nature of the text. Over time the natural language parsing model in Neo4j will grow to identify those features that optimally disambiguate a text to a set of classes.&quot;&lt;/em&gt; (Kenny Bastani, &lt;a href=&quot;http://www.kennybastani.com/2014/08/using-graph-database-for-deep-learning-text-classification.html&quot;&gt;full post&lt;/a&gt;)&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;When you read the rest of Kenny&#039;s blog post you will get a very quick and informative introduction to the &lt;strong&gt;Vector Space Model&lt;/strong&gt; for &lt;strong&gt;Deep Learning&lt;/strong&gt; representation and analysis of text documents. The algebraic model underlying Kenny&#039;s Graphify Neo4j extension is just the kind of Rainman-like, obsessive, detail-oriented processing that is representative of the subconscious side of our cognitive processing.&lt;/p&gt;
&lt;p&gt;If you read the above description of Graphify closely, you will see the opportunity for synergy and integration between Graphify&#039;s &quot;subconscious&quot; processing and the more &quot;conscious&quot; processing reflected in my GraphGists exploring the &quot;self-descriptive&quot; Neo4j graph database.&lt;/p&gt;
&lt;h2&gt;The Sherlock Part - FactMiners&#039; Metamodel Subgraph GraphGists&lt;br /&gt;&lt;/h2&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/Sherlock_Holmes_Cumberbatch.png&quot; width=&quot;260&quot; height=&quot;423&quot; alt=&quot;Sherlock_Holmes_Cumberbatch.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Imagine sitting down for tea with the fictional Sherlock Holmes. We hand him paper and pen, then ask for a description of the particulars of his latest case. Sherlock would surely resort to sketches in the form of a graph diagram or something easily mappable to a graph representation. Graph semantics are &quot;elementary&quot; and flexibly extensible -- properties that Sherlock would surely appreciate.&lt;/p&gt;
&lt;p&gt;I started exploring this &quot;conscious&quot; cognitive process side of graph database application design in the first two parts of my GraphGist design document series, &lt;em&gt;&quot;The &#039;Self-Descriptive&#039; Neo4j Graph Database: Metamodel Subgraphs in the FactMiners Social-Game Ecosystem.&quot;&lt;/em&gt; In the longer and more detailed &lt;a href=&quot;http://gist.neo4j.org/?7817558&quot;&gt;second part of this GraphGist&lt;/a&gt;, I explored how an embedded metamodel subgraph can be used to model a &quot;Fact Cloud&quot; of Linked Open Data to be mined from the text and image data in the complex document structure of a magazine. In our case, we&#039;ll use FactMiners social-gameplay to &quot;fact-mine&quot; a digital archive of the historic Softalk magazine which chronicled the early days of the microcomputer revolution. In this regard, our &quot;sandbox-specific&quot; application is museum informatics. However, there is nothing domain-specific about the solution design we are pursuing.&lt;/p&gt;
&lt;p&gt;With this more general application in mind and in looking for that opportunity where Sherlock can work hand-in-hand with Rainman, it is the &lt;a href=&quot;http://gist.neo4j.org/?8640853&quot;&gt;first part of this GraphGist&lt;/a&gt; series that is the more relevant to the &quot;whole brain&quot; focus of this post.&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/pt2_fig1_pt1meta.png&quot; width=&quot;226&quot; height=&quot;339&quot; alt=&quot;pt2_fig1_pt1meta.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;In this &lt;a href=&quot;http://gist.neo4j.org/?8640853&quot;&gt;first part of my GraphGist&lt;/a&gt;, I provide a &quot;Hello, World!&quot; scale example of how a graph database can be &#039;self-descriptive&#039; to a layer of smart software that can take advantage of this self-descriptive nature. In this gist, I had some fun exploring the old aphorism from Journalism school, &quot;Dog bites man is nothing, but man bites dog, that&#039;s news!&quot; &lt;/p&gt;
&lt;p&gt;In brief, the assumption is that a &#039;self-descriptive&#039; database is &#039;talking&#039; to something that is listening. Under this design pattern, the listening is done by a complementary layer of &quot;smart software&quot; that can use this information to configure itself for all manner of data analysis, editing, and visualization, etc.&lt;/p&gt;
&lt;p&gt;In the case of the ultra-simple &quot;Man bites Dog&quot; example, the layer of smart software is nothing more elaborate than some generalized metamodel-aware Cypher queries. Cypher is the built-in query language for Neo4j. In my gist example, these queries are used for &quot;news item&quot; discovery and validation. By simple extrapolation you can readily imagine the level of &quot;conscious&quot; processing that could be brought to bear to &quot;think about&quot; the data in a &#039;self-descriptive&#039; graph database. &lt;/p&gt;
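&lt;p&gt;To give a flavor of what such a metamodel-aware query might look like, here is a purely illustrative Cypher sketch -- the node labels and relationship types below are hypothetical stand-ins for this discussion, not the actual schema from the gist:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Hypothetical sketch: surface &quot;newsworthy&quot; events, i.e. events whose
// actor&#039;s type is not among the expected patterns recorded in the
// metamodel subgraph (&quot;Man bites dog&quot; rather than &quot;Dog bites man&quot;).
MATCH (e:Event)-[:INSTANCE_OF]-&gt;(t:MetaEventType)
MATCH (e)-[:HAS_ACTOR]-&gt;(actor)-[:INSTANCE_OF]-&gt;(actorType:MetaEntityType)
WHERE NOT (t)-[:EXPECTED_ACTOR]-&gt;(actorType)
RETURN e AS newsworthy_item;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The point of the sketch is that the query never hard-codes domain knowledge; it asks the metamodel subgraph what is &quot;expected&quot; and flags the instance data that deviates.&lt;/p&gt;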
&lt;p&gt;With this much overview of the &quot;subconscious&quot; and &quot;conscious&quot; aspects of our FactMiners&#039; brain, we&#039;re ready to look at that opportunity for integration... that place where Rainman meets and works with Sherlock.&lt;/p&gt;
&lt;h2&gt;How Rainman and Sherlock Might Work Together&lt;br /&gt;&lt;/h2&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/sherlock-and-rainman.png&quot; width=&quot;442&quot; height=&quot;262&quot; alt=&quot;sherlock-and-rainman.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;There is a strong hint at how the Deep Learning &quot;subconscious&quot; processing of Kenny&#039;s Graphify component might fit into the &quot;brain model&quot; of FactMiners. I&#039;ll underline the bits in this quote from Kenny&#039;s article to suggest the integration opportunity: &lt;em&gt;&quot;Graphify gives you a mechanism to &lt;ins&gt;train natural language parsing models&lt;/ins&gt; that extract features of a text using deep learning. When training a model to recognize the meaning of a text, you can send an article of text with a &lt;ins&gt;provided set of labels that describe the nature of the text&lt;/ins&gt;...&quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Graphify just needs to be fed a list of those &quot;text nature describing&quot; labels and a pile of text to dive into, and away it goes. I believe an excellent source of this &quot;text nature knowledge&quot; -- the labels needed to seed the training and data extraction of Kenny&#039;s &quot;subconscious&quot; Rainman-like text classification process -- is the metamodel subgraph of a &#039;self-descriptive&#039; graph database, where those labels are very explicitly represented, maintained, and extended.&lt;/p&gt;
&lt;p&gt;We should be able to establish a feedback loop where Graphify&#039;s label list is supplied by Sherlock&#039;s &quot;mental model&quot; in the metamodel subgraph, and Graphify&#039;s results are fed back to refine or extend the metamodel. How -- or even whether -- this will all work as envisioned is something we will discover over the weeks ahead.&lt;/p&gt;
&lt;p&gt;Next up? We&#039;re looking forward to Kenny&#039;s webinar and to having some serious fun digging into Graphify. &lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 31 Aug 2014 17:27:22 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">34 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/inside-factminers-brain-rainman-meet-sherlock#comments</comments>
</item>
<item>
 <title>Karma to Take a LOD off FactMiners</title>
 <link>http://www.factminers.org/content/karma-take-lod-factminers</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/Karma_is_awesome.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/Karma_is_awesome.png?itok=PlxcVCOM&quot; width=&quot;480&quot; height=&quot;382&quot; alt=&quot;Collage of photos of Karma is an amazing Open Source &amp;quot;multilingual&amp;quot; ontology-aware cross-model smart-mapper&quot; title=&quot;Karma is an amazing Open Source &amp;quot;multilingual&amp;quot; ontology-aware cross-model smart-mapper&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot;&gt;&lt;a href=&quot;/tags/metamodeling&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Metamodeling&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot;&gt;&lt;a href=&quot;/tags/ontologies&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Ontologies&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot;&gt;&lt;a href=&quot;/tags/semantic-web&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Semantic Web&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot;&gt;&lt;a 
href=&quot;/tags/linked-open-data&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Linked Open Data&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;One of the beauties of doing a grassroots Open Source community project is that we are not just open in the terms of our licensing, etc., but open to collaboration and incorporation through sharing both ideas and code. This is why we were ecstatic to learn about Karma.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;http://www.isi.edu/integration/karma/#&quot;&gt;Karma&lt;/a&gt; is an amazing Open Source &quot;multilingual&quot; ontology-aware cross-model smart-mapper providing &quot;Rosetta Stone&quot;-like powers to users coping with the ever-shifting publication of &lt;a href=&quot;http://linkeddata.org/&quot;&gt;Linked Open Data&lt;/a&gt; (LOD). Karma is the evolving brilliant work from the &lt;a href=&quot;http://www.isi.edu/integration/karma/#team&quot;&gt;incredible team of researcher-makers&lt;/a&gt; led by &lt;a href=&quot;http://www.isi.edu/integration/people/knoblock/index.html&quot;&gt;Craig Knoblock&lt;/a&gt; and &lt;a href=&quot;http://www.isi.edu/~szekely/&quot;&gt;Pedro Szekely&lt;/a&gt; of the &lt;a href=&quot;http://www.isi.edu/home&quot;&gt;Information Sciences Institute&lt;/a&gt; at the &lt;strong&gt;University of Southern California&lt;/strong&gt;. &lt;/p&gt;
&lt;p&gt;If we are lucky – because this would be an incredibly difficult, if not impossible, piece of work to duplicate as a subproject within FactMiners itself – Karma will handle this critical &lt;em&gt;&quot;LOD-switchboard&quot; service&lt;/em&gt; as a component of the technology stack for the FactMiners social-game ecosystem. (The Karma codebase is surely going to inform the design of, if not directly contribute as an included component to, the FactMiners Fact Cloud Wizard.)&lt;/p&gt;
&lt;p&gt;I could go on and on about the incredible work the Karma folks have done, but as seeing is believing, please watch and marvel at this demonstration:&lt;br /&gt;&lt;/p&gt;&lt;div class=&quot;image-solo&quot;&gt;
&lt;iframe width=&quot;700&quot; height=&quot;394&quot; src=&quot;//www.youtube.com/embed/1Vaytr09H1w?rel=0&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;em&gt;Note: Karma is envisioned and implemented as a much more general data integration tool than the specific use case that excites us. This is an incredible resource for anyone needing &quot;Rosetta Stone&quot;-like features for both data integration and new model generation. All that said, we&#039;re incredibly thankful that they are doing such a great service to the exploding Linked Open Data (LOD) World of which cultural heritage repositories of Libraries, Archives, and Museums (LAM) are among the most &quot;explosive.&quot; This exploding LAM/LOD World is also a virtually limitless expanse of prospective FactMiners&#039; playgrounds.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2&gt;Why FactMiners Loves Karma&lt;/h2&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_bigpicture.png&quot; width=&quot;371&quot; height=&quot;313&quot; alt=&quot;FactMiners_bigpicture.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;A core idea of the FactMiners social-gaming platform is that the &lt;a href=&quot;/content/about-graph-databases-and-factmining&quot;&gt;FactMiners gameplay produces a by-product which is its collection of Fact Clouds&lt;/a&gt;. At their core – and we skim momentarily into just-enough implementation territory – &lt;a href=&quot;/content/neo4j-graphgist-design-docs-line&quot;&gt;FactMiners Fact Clouds are Neo4j graph databases with an embedded metamodel subgraph&lt;/a&gt; providing a rich and extensible &quot;self-descriptive&quot; resource that describes and validates the &quot;actual&quot; data in the Fact Cloud database. In other words, we have a very &quot;civilized&quot; inner-world where each Fact Cloud will be a remarkably expressive datastore of semantically-rich, queryable, discoverable, analytically viewable &quot;facts.&quot;&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_META_overview_2ndEd.png&quot; width=&quot;420&quot; alt=&quot;FactMiners_META_overview_2ndEd.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;No matter how organized and accessible our Fact Clouds might be internally, responsibly putting these new data resources on-line through LOD publication is itself a daunting and ever-evolving challenge. LOD publishers – which FactMiners Fact Cloud creators will very likely be – need a &quot;smart switchboard&quot; to handle the &quot;semantic pipes&quot; feeding in and out of their semantically-rich datastores, that is, FactMiners Fact Clouds in our case. This daunting task is the exact thing that these incredible research scientists have tackled in birthing and nurturing Karma. The fact that this resource is an Open Source project rather than an enterprise-scaled and priced not-for-you technology is remarkable and greatly appreciated.&lt;/p&gt;
&lt;p&gt;We&#039;ll keep you abreast of our progress exploring the remarkable resource of Karma. In closing, I&#039;d like to thank &lt;a href=&quot;http://rjstein.com/&quot;&gt;Robert Stein&lt;/a&gt;, Deputy Director of the Dallas Museum of Art and a Director of the &lt;a href=&quot;http://mcn.edu/&quot;&gt;Museum Computer Network&lt;/a&gt;, for the referral and introduction to Pedro Szekely. I am looking forward to a planned chat with Pedro after the current USC semester settles into a dull roar.&lt;/p&gt;
&lt;p&gt;So, bottom line, Karma is awesome. Hey, &lt;a href=&quot;http://www.Structr.org&quot;&gt;www.Structr.org&lt;/a&gt; devs... you are SO going to like what this can do for the FactMiners/Structr platform! :D --Jim--&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 07 May 2014 21:36:22 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">17 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/karma-take-lod-factminers#comments</comments>
</item>
<item>
 <title>The Pursuit of Serious Fun with Images and Robots</title>
 <link>http://www.factminers.org/content/pursuit-serious-fun-images-and-robots</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/FUTURAMA-Season-6B-Benderama_SECRAAremix.png?itok=WezSpaOQ&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/FUTURAMA-Season-6B-Benderama_SECRAAremix.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/FUTURAMA-Season-6B-Benderama_SECRAAremix.png?itok=WezSpaOQ&quot; width=&quot;480&quot; height=&quot;277&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/robots&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Robots!&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;&lt;em&gt;Can kids and parents – students and tutors – have fun learning and teaching together AND create something that will contribute to advancing the state-of-the-art of 
computer vision and artificial intelligence research?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;In 2009&lt;/strong&gt; when Dr. Fei-Fei Li and kindred computer vision and artificial intelligence (CV/AI) researchers looked to the Internet for real-world data to test their machine-learning programs doing full scene image recognition, &lt;strong&gt;user-tagged collections of images on sites like Flickr were about as rich a learning resource as could be found&lt;/strong&gt;. Dealing with &quot;dirty&quot;/irrelevant tags in such image collections is a non-trivial challenge for these CV researchers. And it is certainly reasonable for the study designs of these researchers to have assumed scarcity of (assumed-expensive) human resources for both materials prep and interactive tutoring/training of machine-learning programs.&lt;/p&gt;
&lt;p&gt;By 2015 the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt; plans to have both:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;a &lt;strong&gt;semantically-rich Fact Cloud&lt;/strong&gt; for a non-trivial subset of the &lt;strong&gt;British Library Image Collection&lt;/strong&gt;, AND&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;game-energized, crowdsource-powered human-tutor resource&lt;/strong&gt; freely available as a &lt;strong&gt;CV/AI machine-learning program training resource&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;Through initial counsel and collaboration with active CV/AI researchers, we will refine and extend our game design and community dynamics to transition the &#039;Seeing Eye Child&#039; Robot Adoption Agency into its mature and sustainable state. &lt;/p&gt;
&lt;p&gt;At sustainable maturity, the&lt;strong&gt; Robot Adoption Agency gaming community will attract programming learner-players&lt;/strong&gt; who will use the game&#039;s &quot;sandbox&quot; resource and community to develop and extend their CV/AI skills and interests. Some proportion of those programmer-players will develop a deep interest in human/computer interaction and contribute to the Robot Adoption Agency&#039;s gaming community by &lt;strong&gt;creating such components as new Open Source training/tutor workflow plug-ins&lt;/strong&gt;. Those with interests driven more by game design and development will likely contribute &lt;strong&gt;presentation/interaction plug-ins to add fun&lt;/strong&gt; and engaging &lt;strong&gt;robot character generators&lt;/strong&gt; for our programmer-players&#039; otherwise unseen running-in-memory agent programs. When we get to this level of community self-support, the game will be in its own good hands.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_ecosystem.png&quot; width=&quot;500&quot; height=&quot;423&quot; alt=&quot;FactMiners ecosystem&quot; /&gt;&lt;/div&gt;
&lt;p&gt;So, what&#039;s next? Will the FactMiners ecosystem ever be more than just an interesting idea that remains untried? I, for one, don&#039;t intend to let that happen. Step by step, we&#039;re moving forward. In the &lt;a href=&quot;/content/factminers-more-or-less-folksonomy&quot;&gt;&lt;em&gt;&quot;FactMiners: More or Less Folksonomy?&quot;&lt;/em&gt;&lt;/a&gt; article, we have reached out and begun collaborations with museum informatics professionals, both for the counsel of their domain expertise and to find kindred spirits interested in hosting FactMiners Fact Cloud companions for their on-line digital collections. In this article, we&#039;ve described how &lt;strong&gt;the FactMiners ecosystem and its Fact Cloud architecture can accommodate image-based digital collections in addition to the print/text realm of complex magazine document structure&lt;/strong&gt; of our project focus at The Softalk Apple Project. &lt;/p&gt;
&lt;p&gt;In exploring this new use case within digital image collections for the FactMiners ecosystem, &lt;strong&gt;we have identified how our game design can &quot;play&quot; into the domains of computer vision (CV) and artificial intelligence (AI)&lt;/strong&gt;. So among our next steps along the path of bringing the FactMiners ecosystem to life will be to find some kindred spirits in the CV/AI domain interested in exploring just how fun (and useful) it would be to have a British Library Image Collection Fact Cloud companion and &#039;Seeing Eye Child&#039; robot-tutor web service.&lt;/p&gt;
&lt;p&gt;I believe if we can bring the active interest of a CV/AI collaborator to the table as we discuss this idea further with the good folks at the British Library Labs, we&#039;ll be &lt;strong&gt;a BIG step closer to opening the Internet&#039;s first &#039;Seeing Eye Child&#039; Robot Adoption Agency courtesy of the collective efforts of the FactMiners developer community, the British Library Labs, and some as-yet-unidentified CV/AI researchers. Stay tuned...&lt;/strong&gt;&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Fri, 10 Jan 2014 22:24:39 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">5 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/pursuit-serious-fun-images-and-robots#comments</comments>
</item>
<item>
 <title>A Quick Trip to the Stanford Vision Lab</title>
 <link>http://www.factminers.org/content/quick-trip-stanford-vision-lab</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/fei-fei-li-visionlab_logo.png?itok=BYCVB7pm&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/fei-fei-li-visionlab_logo.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/fei-fei-li-visionlab_logo.png?itok=BYCVB7pm&quot; width=&quot;281&quot; height=&quot;192&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;&lt;a href=&quot;http://ai.stanford.edu/site/research/#li&quot;&gt;Fei-Fei Li&lt;/a&gt; is the director of the Computer and Human &lt;a href=&quot;http://vision.stanford.edu/&quot;&gt;Vision Lab&lt;/a&gt; within the legendary &lt;a href=&quot;http://ai.stanford.edu/&quot;&gt;Stanford Artificial Intelligence Laboratory&lt;/a&gt;. 
While her research interests and breakthrough contributions to the field are wide-ranging, I want to focus briefly on a 2009 study she and her colleagues did at Princeton, before Dr. Li&#039;s selection to head the prestigious Stanford Vision Lab. &lt;em&gt;&lt;a href=&quot;http://vision.stanford.edu/projects/totalscene/index.html&quot;&gt;&quot;Towards Total Scene Understanding: Classification, Annotation and Segmentation in an Automatic Framework&quot;&lt;/a&gt;&lt;/em&gt; is a remarkable project and representative of the kind of machine-learning image recognition capabilities envisioned at &quot;serious play&quot; in the &#039;Seeing Eye Child&#039; Robot Adoption Agency game. &lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/stanford_cvlab_totalscene_coherent_model.png&quot; width=&quot;400&quot; height=&quot;254&quot; alt=&quot;stanford_cvlab_totalscene_coherent_model.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Researchers like Dr. Li are creating machine-learning programs that can effectively &#039;look&#039; at a previously unseen image and not just find objects within the image but interpret the full scene. &lt;strong&gt;This example graphic from the project&#039;s summary shows how a total scene is treated as a comprehensive model that incorporates top-down context (e.g., a polo match) along with both visual and textual (tag) elements.&lt;/strong&gt; For Dr. Li and associates&#039; study, the tags and images were drawn from Flickr – yes, the same popular image-sharing site where the British Library released its 1-million-plus public domain image collection – and the end-to-end automatic machine-learning/scene-recognition process is generally described as shown in the following 3-step workflow diagram from the study.&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/stanford_cvlab_totalscene_automatic_framework_system_flow.png&quot; width=&quot;500&quot; height=&quot;241&quot; alt=&quot;stanford_cvlab_totalscene_automatic_framework_system_flow.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;The results of this machine-learning strategy are both remarkable and encouraging for our game design requirements.&lt;/strong&gt; Given a &#039;seed&#039; set of curated images with text label &#039;hints&#039; and a representative collection of similar-scene images clustered by the eight targeted sports categories, the researchers&#039; automated process does an impressive job of finding and labeling scene elements in new, unseen images, producing results such as this test image, which has been correctly recognized as a polo scene with horse and rider, trees, and grass:&lt;/p&gt;
&lt;div class=&quot;image-solo&quot;&gt;&lt;img src=&quot;/sites/default/files/images/stanford_cvlab_totalscene_tagged_image_polo.png&quot; width=&quot;544&quot; height=&quot;306&quot; alt=&quot;stanford_cvlab_totalscene_tagged_image_polo.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;While Dr. Li and associates&#039; strategy is particularly robust and comprehensive, their paper cites 22 additional studies in four broad areas of CV/AI research that tackle the full scene-understanding challenge:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;Image understanding using contextual information&lt;/li&gt;
&lt;li&gt;Machine translation between words and images&lt;/li&gt;
&lt;li&gt;Simultaneous object recognition and segmentation&lt;/li&gt;
&lt;li&gt;Learning semantic visual models from Internet data&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;So it is safe to say that the domains of &lt;strong&gt;computer vision and artificial intelligence (CV/AI) are at a sufficient stage of capability and active research that the kind of &#039;robot&#039; vision required for our game design is both doable and getting better.&lt;/strong&gt; If you have any doubts (or, better yet, an interest in knowing more), I encourage you to read &lt;a href=&quot;http://vision.stanford.edu/projects/totalscene/index.html&quot;&gt;Dr. Li&#039;s project overview&lt;/a&gt; or, even better, the &lt;a href=&quot;http://vision.stanford.edu/documents/LiSocherFei-Fei_CVPR2009.pdf&quot;&gt;full study PDF&lt;/a&gt;, and follow its links to cited related research.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&quot;/content/pursuit-serious-fun-images-and-robots&quot;&gt;concluding post of this series&lt;/a&gt; about the potential for FactMiners to contribute to the &quot;serious fun&quot; at the British Library Image Collection, I&#039;ll set some goals and chart a course forward to add the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt; to the selection of social-learning games to be developed by, and available to, the FactMiners gaming community.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 01 Jan 2014 22:11:31 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">6 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/quick-trip-stanford-vision-lab#comments</comments>
</item>
<item>
 <title>Finding the &#039;CV&#039; in &#039;STEM&#039; at the British Library Image Collection</title>
 <link>http://www.factminers.org/content/finding-cv-stem-british-library-image-collection</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/Tron_1982.jpg?itok=2IVpLFId&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/Tron_1982.jpg&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/Tron_1982.jpg?itok=2IVpLFId&quot; width=&quot;214&quot; height=&quot;314&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;&lt;strong&gt;Computer Vision&lt;/strong&gt;, known by its popular acronym &#039;&lt;strong&gt;CV&lt;/strong&gt;&#039;, is a discipline of scientific knowledge and practice within Artificial Intelligence, itself part of the broad domain of Computer Science. 
CV is a particularly challenging field with strong connections into each of the Science, Technology, Engineering, and Math &#039;branches&#039; of the &lt;a href=&quot;http://en.wikipedia.org/wiki/STEM_fields&quot;&gt;STEM fields&lt;/a&gt; of education.&lt;/p&gt;
&lt;p&gt;You only have to glance at the image here of a modern auto assembly line and compare it to one of our not-so-distant mid-20th-century lines, or watch the workerbot demo video, to know this: &lt;em&gt;What is currently &lt;strong&gt;an exciting bleeding edge of CV and AI research and industrial practice will become a mainstream, vital skill area&lt;/strong&gt; for both entrepreneurial and employment opportunities in the foreseeable future.&lt;/em&gt;&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/auto-plant-robots.png&quot; width=&quot;415&quot; height=&quot;307&quot; alt=&quot;auto-plant-robots.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;And how will this growing cohort of budding job creators and job fillers gain their skills in Computer Vision programming?&lt;/strong&gt; If they get to &lt;strong&gt;play the &#039;Seeing Eye Child&#039; Robot Adoption Agency game&lt;/strong&gt; during their formative years, a growing number of them could develop strong CV and AI programming skills and interests by writing FactMiners &#039;Robot players&#039; to be effectively &#039;put up&#039; for adoption in the game&#039;s Adoption Agency. Role-wise, these &#039;robot&#039; player/programs are the &#039;real&#039; player&#039;s &lt;a href=&quot;http://en.wikipedia.org/wiki/Agent-based_model&quot;&gt;agent-actor programs&lt;/a&gt;. To borrow the &lt;a href=&quot;http://sohodojo.com/newsletters/rnr_newsletter_05.html#topic3&quot;&gt;anthropomorphic imagery&lt;/a&gt; of &lt;a href=&quot;http://www.imdb.com/title/tt0084827/&quot;&gt;Disney&#039;s 1982 sci-fi classic, TRON&lt;/a&gt;, our young programming &quot;Flynn&quot; user/players will be sending their &quot;Clu&quot; agent/programs into the Robot Adoption Agency to begin a cyber-learning journey into the British Library Digital Image Collection.&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;
&lt;iframe width=&quot;400&quot; height=&quot;225&quot; src=&quot;//www.youtube.com/embed/UJMHO29FRbA?rel=0&quot; frameborder=&quot;0&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;p&gt;From an &#039;ends-testing&#039; perspective, our concern about the game&#039;s Adoption Agency robot supply can more creatively be seen not as a &lt;em&gt;robot&lt;/em&gt; (i.e. agent-program) supply &lt;strong&gt;problem&lt;/strong&gt;, but rather as a &lt;em&gt;robot-programmer&lt;/em&gt; supply &lt;strong&gt;opportunity&lt;/strong&gt;. Our game&#039;s desirable side effect is to act as part of its own positive feedback loop, whereby the demand for more and better CV/AI programs in the emerging &lt;a href=&quot;http://en.wikipedia.org/wiki/Internet_of_things&quot;&gt;Internet of Things&lt;/a&gt; will generate demand for more and better robot programmers.&lt;/p&gt;
&lt;p&gt;Having both means- and ends-tested the motivation and justification for this game design idea, a fundamental question remains... &lt;strong&gt;Is machine-learning image recognition like what is assumed to be available in the proposed Robot Adoption Agency game even possible?&lt;/strong&gt; And if possible, &lt;strong&gt;is there a place for human-mediated vision training (the &#039;Seeing Eye Child&#039; player&#039;s role) as envisioned in the game?&lt;/strong&gt; To consider these important questions, let&#039;s take a &lt;a href=&quot;/content/quick-trip-stanford-vision-lab&quot;&gt;quick trip to the legendary Stanford AI Lab&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 29 Dec 2013 01:33:12 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">7 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/finding-cv-stem-british-library-image-collection#comments</comments>
</item>
<item>
 <title>Introducing the &#039;Seeing Eye Child&#039; Robot Adoption Agency</title>
 <link>http://www.factminers.org/content/introducing-seeing-eye-child-robot-adoption-agency</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/auto-plant-robots.png?itok=BJ83LWSP&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/auto-plant-robots.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/auto-plant-robots.png?itok=BJ83LWSP&quot; width=&quot;480&quot; height=&quot;354&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/game-ideas&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Game Ideas&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/robots&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Robots!&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; 
property=&quot;content:encoded&quot;&gt;&lt;p&gt;In the &lt;a href=&quot;/content/factminers-fact-cloud-british-library-image-collection&quot;&gt;first part of this informal proposal to creatively tap the newly-published British Library Image Collection&lt;/a&gt;, I imagined a plug-in game to be developed as part of the &lt;a href=&quot;/content/factminers-more-or-less-folksonomy&quot;&gt;&lt;strong&gt;FactMiners&lt;/strong&gt; social-game ecosystem&lt;/a&gt;. In this adult/child-interactive early-learning app, gameplayers collectively contribute to building a &lt;strong&gt;Fact Cloud&lt;/strong&gt; of &lt;em&gt;&quot;What&#039;s in this picture&quot;&lt;/em&gt; facts (elementary sentence-like assertions stored in a graph database) for the more than one million images recently uploaded to the &lt;a href=&quot;http://www.flickr.com/photos/britishlibrary&quot;&gt;Flickr Commons&lt;/a&gt;. &lt;strong&gt;Parents playing this new word/picture FactMiners plug-in game with their kids create the Fact Cloud&lt;/strong&gt; that becomes a vital resource for &lt;em&gt;a second new FactMiners game&lt;/em&gt;: the &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/strong&gt; is similar to the &lt;a href=&quot;http://en.wikipedia.org/wiki/Tamagotchi&quot;&gt;&#039;Tamagotchi&#039; or &#039;digital pet&#039;&lt;/a&gt; gaming phenomenon that hit in the mid-1990s and is still going strong. The difference here is that we harness our little cognitive learning machines – AKA FactMiners game players – to &lt;strong&gt;&#039;adopt&#039; a robot&lt;/strong&gt; (AKA a machine-learning program with some form of vision – image intake and analysis – capability) and help it learn to see and understand its world. As an adoptive &#039;Seeing Eye Child&#039;, players take on the &lt;em&gt;roles&lt;/em&gt; of &lt;strong&gt;coach&lt;/strong&gt; and &lt;strong&gt;referee&lt;/strong&gt; for training sessions where adopted robots learn to see what&#039;s in the British Library images.&lt;/p&gt;
&lt;p&gt;The nimble-thinking among you have likely spotted the weak link in this proposed game design... &lt;strong&gt;robot supply&lt;/strong&gt;. &lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FUTURAMA-Season-6B-Benderama_SECRAAremix.png&quot; width=&quot;520&quot; height=&quot;300&quot; alt=&quot;FUTURAMA-Season-6B-Benderama_SECRAAremix.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Are we honestly to believe that the latest industrial robots ready to be brought on-line to a Ford or Toyota auto assembly line are in need of vision-training sessions with young kids mentoring their ability to recognize scenes from 17th-19th century book illustrations? Well, that is most certainly not the case. &lt;/p&gt;
&lt;p&gt;So how can the Fact Cloud creators – those playing the word/picture FactMiners game that creates the Fact Cloud descriptive companion to the British Library Image Collection – be motivated to create that Fact Cloud if its imagined great use in robot vision training turns out not to be a need at all? What kid is going to wait around in a Robot Adoption Agency match-making server&#039;s &#039;waiting room&#039; for an adoptable robot that may never show up?&lt;/p&gt;
&lt;p&gt;Fortunately, we can consider both the means and the ends of the Fact Cloud creation effort to answer such important questions. From a &#039;means value&#039; perspective, the image-describing FactMiners gameplay that creates the Fact Cloud is a fun, social, interactive learning activity. &lt;strong&gt;There is an immediate and personal motivation and value for parents, siblings, tutors, and teachers to help little learners build the British Library Image Collection Fact Cloud.&lt;/strong&gt; So even if the robot vision training need were to turn out to be an elusive future-imagining, the &#039;serious fun&#039; of building the British Library Image Collection Fact Cloud is time and energy well spent on direct, interactive early-childhood educational activity.&lt;/p&gt;
&lt;p&gt;Having &#039;means-tested&#039; the effort to create the British Library Image Collection Fact Cloud, in my next post I will turn our attention to the &#039;ends&#039; test – &lt;a href=&quot;/content/finding-cv-stem-british-library-image-collection&quot;&gt;Will we really have a robot supply problem at the &#039;Seeing Eye Child&#039; Robot Adoption Agency?&lt;/a&gt;...&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 25 Dec 2013 22:03:17 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">8 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/introducing-seeing-eye-child-robot-adoption-agency#comments</comments>
</item>
<item>
 <title>A FactMiners&#039; Fact Cloud for the British Library Image Collection</title>
 <link>http://www.factminers.org/content/factminers-fact-cloud-british-library-image-collection</link>
 <description>&lt;div class=&quot;field field-name-field-image field-type-image field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;figure class=&quot;clearfix field-item even&quot; rel=&quot;og:image rdfs:seeAlso&quot; resource=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/BritishLibrary_Flickr_images.png?itok=SoxfQp9_&quot;&gt;&lt;a href=&quot;http://www.factminers.org/sites/default/files/images/BritishLibrary_Flickr_images.png&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; class=&quot;image-style-large&quot; src=&quot;http://www.factminers.org/sites/default/files/styles/large/public/images/BritishLibrary_Flickr_images.png?itok=SoxfQp9_&quot; width=&quot;292&quot; height=&quot;233&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/figure&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;ul class=&quot;field-items&quot;&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/openculture&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;OpenCulture&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/ai-etc&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;AI etc.&lt;/a&gt;&lt;/li&gt;&lt;li class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/tags/game-ideas&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Game Ideas&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss view-mode-rss&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; 
property=&quot;content:encoded&quot;&gt;&lt;p&gt;I was thrilled to read the announcement this week in the &lt;a href=&quot;http://britishlibrary.typepad.co.uk/digital-scholarship/index.html&quot;&gt;British Library Digital Scholarship blog&lt;/a&gt; about the &lt;a href=&quot;http://www.flickr.com/photos/britishlibrary&quot;&gt;Library&#039;s upload of over 1 million Public Domain images to the Flickr Commons&lt;/a&gt;, scanned from 17th-, 18th-, and 19th-century books in the Library&#039;s physical collections. The Flickr image collection makes the individual images easily available for public use. Currently, the metadata about each image includes the most basic source information but nothing about the image itself. In the words of project tech lead Ben O&#039;Steen:&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;We may know which book, volume and page an image was drawn from, but we know nothing about a given image. Consider the image below. The title of the work may suggest the thematic subject matter of any illustrations in the book, but it doesn&#039;t suggest how colourful and arresting these images are.&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;a href=&quot;http://www.flickr.com/photos/britishlibrary/11075039705/&quot;&gt;&lt;img src=&quot;http://britishlibrary.typepad.co.uk/.a/6a00d8341c464853ef019b029b054d970b-800wi&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;p&gt;	&lt;a href=&quot;http://www.flickr.com/photos/britishlibrary/tags/imagesfrombook001012871/&quot;&gt;See more from this book&lt;/a&gt;: &quot;Historia de las Indias de Nueva-España y islas de Tierra Firme...&quot; (1867)&lt;/p&gt;
&lt;p&gt;	We plan to launch a crowdsourcing application at the beginning of next year, to help describe what the images portray. Our intention is to use this data to train automated classifiers that will run against the whole of the content. The data from this will be as openly licensed as is sensible (given the nature of crowdsourcing) and the code, as always, will be under an open license.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Ben went on to explain, &quot;Which brings me to the point of this release. &lt;strong&gt;We are looking for new, inventive ways to navigate, find and display these &#039;unseen illustrations&#039;&lt;/strong&gt;.&quot;&lt;/p&gt;
&lt;p&gt;Well, Ben&#039;s challenge got me thinking... &lt;strong&gt;What would be the value of creating a FactMiners&#039; Fact Cloud Companion to the British Library Public Domain Image Collection?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;And that&#039;s when I had my latest &quot;Eureka Moment&quot; about why the &lt;a href=&quot;/content/factminers-more-or-less-folksonomy&quot;&gt;FactMiners social-game ecosystem&lt;/a&gt; is such a compelling idea (at least to me and a few others at this point :-) ). First, let me briefly describe what a Fact Cloud Companion would look like for the British Library Image Collection before exploring why this is such an exciting and potentially important idea.&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: What?&lt;/h2&gt;
&lt;p&gt;When Ben laments that the Library&#039;s image collection does not know anything about the content of the individual images, I believe he &#039;undersold&#039; that statement by noting only that the metadata cannot tell us how colorful or arresting an image is. But there is a much more significant truth underlying his statement.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Images are incredible &quot;compressed storage&quot; of all the &quot;facts&quot; (verbal assertions) that we instantly understand when we humans look at an image.&lt;/strong&gt; The image Ben referenced above of the man in ceremonial South American tribal regalia is chock-full of &quot;facts&quot; like:&lt;/p&gt;
&lt;ul&gt;&lt;li&gt;The man is wearing a mask.&lt;/li&gt;
&lt;li&gt;The man is wearing a blue tunic.&lt;/li&gt;
&lt;li&gt;The man is holding a long, pointed, wavy stick.&lt;/li&gt;
&lt;li&gt;The man has a feathered shield in his left hand.&lt;/li&gt;
&lt;li&gt;The man is standing on a fringed rug.&lt;/li&gt;
&lt;li&gt;The man has a beaded bracelet on his right arm.&lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;I&#039;ve written briefly about &lt;a href=&quot;/about_graphs_and_factmining&quot;&gt;how an Open Source graph database, like Neo4j, is an ideal technology for capturing FactMiners&#039; Fact Clouds&lt;/a&gt;. So I won&#039;t belabor the point by drilling down here on these example &#039;image facts&#039; to the level of graph data insertions or related queries. Suffice it to say that the means are readily available to design and capture a reasonable and useful graph database of facts/assertions about what is &quot;seen&quot; in the &quot;unseen illustrations&quot; of the British Library image collection.&lt;/p&gt;
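For the curious, here is a minimal, purely illustrative Python sketch (hypothetical names throughout, not actual FactMiners or Neo4j code) of how the example &#039;image facts&#039; above might be modeled as subject-predicate-object assertions and queried:

```python
# Illustrative sketch only: models 'image facts' as simple
# subject-predicate-object assertions, the same shape a graph
# database such as Neo4j would store as nodes and relationships.
facts = [
    ("man", "IS_WEARING", "mask"),
    ("man", "IS_WEARING", "blue tunic"),
    ("man", "IS_HOLDING", "long, pointed, wavy stick"),
    ("man", "HAS_IN_LEFT_HAND", "feathered shield"),
    ("man", "IS_STANDING_ON", "fringed rug"),
    ("man", "HAS_ON_RIGHT_ARM", "beaded bracelet"),
]

def what_is(subject, predicate, fact_cloud):
    """Answer questions like 'What is the man wearing?' from the fact cloud."""
    return [obj for s, p, obj in fact_cloud if s == subject and p == predicate]

# A 'robot' visitor could ask: "What is the man wearing?"
print(what_is("man", "IS_WEARING", facts))  # ['mask', 'blue tunic']
```

In a real Fact Cloud, these assertions would live as graph nodes and relationships queried through the database itself rather than a Python list, but the shape of the data is the same.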
&lt;p&gt;Rather, I want to move on quickly to the &quot;A-ha Moment&quot; I had about why creating a Fact Cloud Companion to the British Library Image Collection could be a Very Good Thing.&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: Why?&lt;/h2&gt;
&lt;p&gt;Every time we look at an image, our brains decompress it into an &quot;explosion of facts.&quot; By bringing image collections into the FactMiners&#039; &quot;serious play arena,&quot; we are, in effect, capturing that &quot;human image decompression&quot; process as a sharable artifact rather than a transient individual cognitive event. In other words, &lt;strong&gt;every child goes through the learning process of &quot;seeing&quot; what&#039;s in a picture&lt;/strong&gt;. When these &quot;little learning machines&quot; do a proportion of that natural childhood learning activity by playing FactMiners at the British Library Image Collection, we get a truly interesting &#039;by-product&#039; in the Fact Cloud Companion.&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/danger_will_robinson.jpg&quot; width=&quot;286&quot; height=&quot;362&quot; alt=&quot;danger_will_robinson.jpg&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Beyond the obvious use of a Fact Cloud for folksonomy-class applications supporting public and researcher access to the source collection, &lt;strong&gt;a FactMiners Fact Cloud Companion to the British Library Public Domain Image Collection would be an invaluable resource&lt;/strong&gt; for that &lt;em&gt;newly emerging museum and archive visitor base...&lt;/em&gt; &lt;strong&gt;robots.&lt;/strong&gt; Well, not so much the fully anthropomorphized walking/talking robots, at least not just yet. I&#039;m thinking here more of &lt;strong&gt;machine-learning programs&lt;/strong&gt;, specifically those with any form of &#039;image vision&#039; capability – whether by crude file/data &#039;input&#039; or real-time vision sensors.&lt;/p&gt;
&lt;p&gt;Upon entering the British Library Image Collection, our robot/machine-learning-program visitors would find a rich &#039;playground&#039; in which to hone their vision capabilities. All those Fact Cloud &#039;facts&#039; about what is &#039;seen&#039; in the collection&#039;s previously &#039;unseen images&#039; would be available at machine-thinking/learning speed to answer the litany of questions – &quot;What&#039;s that?&quot;, &quot;Is that a snake?&quot;, &quot;Is that boy under the table?&quot; – questions that a machine-learning program might use to refine its vision capabilities.&lt;/p&gt;
&lt;p&gt;So while the primary intent of the project is making these images available for Open Culture sharing and use, there may be some equally valuable side effects of this project. &lt;strong&gt;The British Library Image Collection and its Fact Cloud Companion could become a &quot;go-to&quot; stop for any vision-capable robot or machine-learning program that aspires to better understand the world it sees.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;A FactMiners Fact Cloud for Images: How?&lt;/h2&gt;
&lt;p&gt;As the good folks at the British Library well know, just getting a good folksonomy social-tagging resource developed for such a huge collection is itself no small task. This is why museums and archives, like the British Library and those collaborating in &lt;a href=&quot;http://www.steve.museum/&quot;&gt;the steve project&lt;/a&gt;, are turning to crowdsourcing methods to get the &#039;heavy-lifting&#039; of these tasks done. &lt;strong&gt;Crowdsourcing goes hand-in-hand with gamification&lt;/strong&gt; in this regard. If we can&#039;t pay you to help us out, at least we can make the work fun, right?&lt;/p&gt;
&lt;div class=&quot;image-right&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_kid_playing_app.png&quot; width=&quot;460&quot; height=&quot;316&quot; alt=&quot;FactMiners_kid_playing_app.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Well, you don&#039;t have to think too hard to realize that if creating a folksonomy is a big chore, then creating a useful Fact Cloud representing at least a good chunk of the &#039;seen&#039; in the previously &#039;unseen illustrations&#039; of the British Library Image Collection is a Way Too Big Chore. And this might be true. But I think that there is some uniquely wonderful &#039;harness-able labor&#039; to be tapped in this regard. &lt;/p&gt;
&lt;p&gt;I know we can make &lt;strong&gt;a really fun app where parents and older folks can help kids learn by playing, building fact-by-fact a valuable resource at the British Library, for one&lt;/strong&gt;. A learning child is a torrent of cognitive processing. Let a stream of that raw learning energy run through the FactMiners game at the British Library Image Collection and you&#039;d have critical mass in a Fact Cloud faster than you can say, &quot;Danger, Will Robinson!&quot;&lt;/p&gt;
&lt;p&gt;And where might this lead? Well, where all this might lead, Big Picture-wise, is beyond the scope of this post. But I can see it leading to a new, previously unimagined game to add to the mix of social games available to FactMiners players... and it&#039;s a bit of a doozy. :-)&lt;/p&gt;
&lt;p&gt;If the British Library creates a FactMiners Fact Cloud Companion to its Image Collection, and if that Fact Cloud becomes useful to robots (machine-learning programs) as a vision-learning resource, I can see where we would want to add a &lt;strong&gt;&#039;Seeing Eye Child&#039; Robot Adoption Agency Game&lt;/strong&gt; to the FactMiners game plug-ins. What would that game be like?&lt;/p&gt;
&lt;div class=&quot;image-left&quot;&gt;&lt;img src=&quot;/sites/default/files/images/FactMiners_robot_training_kids.png&quot; width=&quot;541&quot; height=&quot;409&quot; alt=&quot;FactMiners_robot_training_kids.png&quot; /&gt;&lt;/div&gt;
&lt;p&gt;Well, as good a learning resource as an Image Collection Fact Cloud might be, and as smart a learner as a machine-learning program might be, a robot&#039;s learning to see isn&#039;t likely to be a fully automated process. So we &lt;strong&gt;create a game where one or more kids &#039;adopt&#039; a robot/machine-learning program to help it learn&lt;/strong&gt;. In this case, the FactMiners player would gain experience points, badges, etc. by being available for &#039;vision training&#039; sessions with the adopted robot. The FactMiners player is, in effect, the referee and coach for the robot as it learns to see.&lt;/p&gt;
&lt;p&gt;It doesn&#039;t take much imagination to see how this could lead to schools fielding teams in &lt;strong&gt;contests to take a &#039;stock&#039; robot/machine-learning program and train it to enter various vision-recognition challenges&lt;/strong&gt;. And when I let my imagination run with these ideas, things get very interesting, very fast. But any run, even of one&#039;s imagination, starts with a first step.&lt;/p&gt;
&lt;p&gt;Will we get a chance to make a Fact Cloud Companion to the British Library Image Collection? I don&#039;t know. This week the British Library took &lt;a href=&quot;http://britishlibrary.typepad.co.uk/digital-scholarship/2013/12/a-million-first-steps.html&quot;&gt;a million first steps&lt;/a&gt; toward making their vast digital image collection available to all for free. Perhaps the first step of posting this article will lead us on a path where we will have some serious fun working with the Library to help kids who help robots learn to see and understand our world.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;--Jim Salmons--&lt;br /&gt;
Cedar Rapids, Iowa USA&lt;/em&gt;&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; An encouraging reply of exploratory interest from the good folks at the British Library Labs has juiced my motivation to further &lt;a href=&quot;/content/introducing-seeing-eye-child-robot-adoption-agency&quot;&gt;explore the potential for the &#039;Seeing Eye Child&#039; Robot Adoption Agency&lt;/a&gt; as a FactMiners plug-in game.&lt;/p&gt;&lt;/blockquote&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 15 Dec 2013 21:21:20 +0000</pubDate>
 <dc:creator>Jim Salmons</dc:creator>
 <guid isPermaLink="false">9 at http://www.factminers.org</guid>
 <comments>http://www.factminers.org/content/factminers-fact-cloud-british-library-image-collection#comments</comments>
</item>
</channel>
</rss>