Adding Icons To Obsidian's File Explorer

There are plugins available for adding icons to the Obsidian file-explorer, but I decided not to use them for two reasons:

  1. I'm trying to maintain a "plugin diet", on the grounds that the more plugins I add to Obsidian, the more likely I am to hit problems with incompatibility and performance.
  2. The plugins I tried, although mostly functional, were nonetheless buggy.

Instead, it is possible to decorate the file-explorer with icons using nothing other than CSS, via Obsidian's CSS "snippets" feature and the standard set of Unicode emoji characters. This appeals to me because it avoids the need for a plugin for what is largely a cosmetic concern.

There are two ways to add emoji to files and folders:

Adding icons to explicitly named files/folders

The image shows files and folders in the root of my Obsidian vault, followed by the CSS needed to decorate them with emoji.


/*ALL FILES AND FOLDERS*/  
.nav-folder-title-content::before, .nav-file-title-content::before {margin-right: 3px;}

/*SPECIFIC FILES AND FOLDERS*/  
.nav-file-title[data-path = "Dashboard.md"] .nav-file-title-content::before {content: '📋';}  
.nav-folder-title[data-path = "Inbox"] .nav-folder-title-content::before {content: '⬇️';}  
.nav-folder-title[data-path = "Clippings"] .nav-folder-title-content::before {content: '📰';}  
.nav-folder-title[data-path = "Diary"] .nav-folder-title-content::before {content: '📆';}  
.nav-folder-title[data-path = "Work"] .nav-folder-title-content::before {content: '👨🏼‍💻';}  
.nav-folder-title[data-path = "Home"] .nav-folder-title-content::before {content: '🏠️';}  
.nav-folder-title[data-path = "Personal"] .nav-folder-title-content::before {content: '👤';}  
.nav-folder-title[data-path $= "Reference"] .nav-folder-title-content::before {content: '🗄️';}  
.nav-folder-title[data-path = "Zettelkasten"] .nav-folder-title-content::before {content: '🧠';}  
.nav-folder-title[data-path = "Blog"] .nav-folder-title-content::before {content: '🌍';}  
.nav-folder-title[data-path = "Indexes"] .nav-folder-title-content::before {content: '📜️';}  
.nav-folder-title[data-path = "config"] .nav-folder-title-content::before {content: '⚙️';}

Adding icons to files/folders by pattern

It is also possible to decorate file/folder names in the Obsidian file-explorer using pattern-matching CSS attribute selectors, like this:

/* FILES AND FOLDER PATTERNS*/  
.nav-folder-title[data-path $= "archive"] .nav-folder-title-content::before {content: '️🏛️';}
.nav-file-title[data-path $= "Tasks.md"] .nav-file-title-content::before {content: '️☑️';}

The first CSS rule decorates any folder path that ends with ('$=') the word "archive", while the second decorates any file which ends with "Tasks.md". These rules apply to any matching folder/file, at any level in the vault.
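These are standard CSS attribute-selector operators, so the approach extends beyond '$='. As a sketch (the folder and file names here are hypothetical, not from my vault), '^=' matches a path prefix and '*=' matches a substring anywhere in the path:

```css
/* Any folder whose path starts with "Projects" */
.nav-folder-title[data-path ^= "Projects"] .nav-folder-title-content::before {content: '🛠️';}

/* Any file whose path contains "Meeting", at any level */
.nav-file-title[data-path *= "Meeting"] .nav-file-title-content::before {content: '🗓️';}
```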

This is actually quite useful: I generally add a note called "Tasks" to my project folders, and having an icon automatically added to the filename makes it easy to locate immediately in a potentially long list of files.

If you would like to leave a comment - or read other comments - please go to this Mastodon post.

Rendering Images with Hugo

This website is served as static HTML, compiled by a really capable "static-site generator" called Hugo. I had a problem to solve with rendering images: sometimes I want an image which is local to a particular blog post to also show up on the homepage, which displays the most recent posts. The problem is that the relative URL for the image is different when that content is served on the homepage. I don't want to hard-code absolute URLs, but I do want to reuse the same source content in different parts of the website. Therefore, I needed Hugo to intelligently rewrite those URLs when compiling the website.

This is where Hugo's relatively new Markdown render hooks come in. I've added the following code to a template at layouts/_default/_markup/render-image.html:

{{ $url := urls.Parse .Destination }}
{{ if or (eq $url.IsAbs true) (hasPrefix .Destination "/") }}
    <img src="{{ .Destination }}" title="{{ .Title }}" alt="{{ .Title }}"/>
{{ else }}
    <img src="{{ .Page.Permalink }}/{{ .Destination }}" title="{{ .Title }}" alt="{{ .Title }}"/>
{{ end }}

This has the effect of prepending the page's absolute URL to the image path at compile time. It is invoked every time a Markdown image element is encountered in the sources. If the image is not local to the page (e.g. an external image, or one served from a folder relative to the webroot rather than the current page's folder), then the img tag is rendered with the URL unchanged.

Hugo's render hooks are an interesting and useful addition. You can currently specify render hooks for:

  • image
  • link
  • heading
  • codeblock

Gell-Mann Amnesia Effect

The Gell-Mann Amnesia Effect was coined by the late Michael Crichton in a talk entitled Why Speculate, given to the International Leadership Forum, La Jolla, in 2002. Below is an excerpt from that talk:

Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I call it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.)

Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward, reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.

That is the Gell-Mann Amnesia effect. I'd point out it does not operate in other arenas of life. In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all.

I think this has become worse in recent years. Much of the mainstream press and TV news seems to dwell in the realm of speculation, more than dry, objective reportage. The important lesson is, frankly, to doubt everything you read in the news unless you have reason to trust the source. This is exhausting, and makes the whole business of actually reading "speculative" news reporting somewhat pointless.

As Crichton said, introducing the transcript of the talk on his website:

In recent years, media has increasingly turned away from reporting what has happened to focus on speculation about what may happen in the future. Paying attention to modern media is thus a waste of time.

In recent months I have successfully weaned myself off daily news consumption. I pick up bits and pieces, here and there, but I no longer intentionally go to news sources. At the weekend, I catch up with digests from a few, trusted sources. I do not think this has significantly impaired my awareness of current affairs, while it has certainly saved me from wasting a lot of time!

Folder Indexes With Obsidian and Dataview

Using the excellent Dataview plugin for Obsidian, inserting the snippet below into a note will create a table listing:

  • all notes in the same folder as the note, and in all sub-folders, recursively
  • all notes which link to the note
  • all notes linked to by the note

Particularly when used in a "folder note" (a note which serves as the key note in any given folder), this is a simple way to create a kind of "section index" for that part of the folder hierarchy.

```dataview
TABLE rows.file.link AS Pages
WHERE
	(contains(file.folder, this.file.folder)
	OR contains(file.inlinks, this.file.link)
	OR contains(file.outlinks, this.file.link))
	AND file != this.file
GROUP BY file.folder AS Folder
```

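For a lighter-weight index, a minimal variant of the same idea (a sketch using the same Dataview fields) lists only the notes in the current folder, without recursion or grouping:

```dataview
LIST
WHERE file.folder = this.file.folder AND file != this.file
SORT file.name ASC
```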
If you would like to leave a comment - or read other comments - please go to this Mastodon post.

Open and Engaged Conference 2023

I attended the British Library's annual Open and Engaged conference on 2023-10-30, held in their conference centre in St Pancras, London. At the time, the British Library had just discovered that they had been subjected to a cyber attack (this is ongoing at the time of writing). Despite the ensuing disruption (BL staff were unable to access their email or documents, and the BL's internet access was offline), the staff there managed the remarkable feat of hosting the event with little evidence of the chaos in the background. I found the day interesting, and made the following notes from the various speakers' presentations.

Keynote from Monica Westin

Monica (Internet Archive) gave an entertaining talk on new ownership models for cultural heritage institutions. From this I learned about two interesting initiatives:

Internet Archive Scholar

This was conceived as an archiving solution, but then evolved to become a search service.

This fulltext search index includes over 25 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth century journals through the latest Open Access conference proceedings and pre-prints crawled from the World Wide Web. https://scholar.archive.org

OJS - Beacon

This is a function in the open-source OJS platform to report usage statistics back to a central collection, to aid in ongoing product design and marketing.

More than 8 million items have been published with Open Journal Systems, our open-source publishing software trusted by more than a million scholars in almost every country on the planet. (Global Usage of OJS - Public Knowledge Project)

Mia Ridge, Living with Machines

Mia (British Library) described the Living with Machines project.

Living with Machines is both a research project, and a bold proposal for a new research paradigm. In this ground-breaking partnership between The Alan Turing Institute, the British Library, and the Universities of Cambridge, East Anglia, Exeter, and London (QMUL, King’s College), historians, data scientists, geographers, computational linguists, and curators have been brought together to examine the human impact of industrial revolution. https://livingwithmachines.ac.uk/about/

Douglas McCarthy, This is not IP I'm familiar with

Douglas (Delft University of Technology) talked about "the strange afterlife and untapped potential of public domain content in GLAM institutions". This was an excellent talk on different perceptions of copyright in the GLAM sector. There was a startling contrast between UK and non-UK institutions in their respective treatment of digital surrogates of The Rake's Progress, with UK institutions largely avoiding public domain licensing and claiming copyright instead. True to his topic, Douglas has made his slides available here and they are well worth viewing. I particularly liked his use of the "Drake meme" and have been using it in my own work, most recently when presenting to a workshop in Nigeria.

Emma Karoune, The Turing Way

Emma (The Alan Turing Institute) spoke about Community-led Resources for Open Research and Data Science. Her team has assembled a rich set of resources to support data-science communities.

The Turing Way project is open source, open collaboration, and community-driven. We involve and support a diverse community of contributors to make data science accessible, comprehensible and effective for everyone. Our goal is to provide all the information that researchers and data scientists in academia, industry and the public sector need to ensure that the projects they work on are easy to reproduce and reuse. Welcome — The Turing Way

I have made a note to examine more closely the Community Handbook they have produced - not only for its content, but also for the way in which they have produced it.

Iryna Kuchma, Collective action for driving open science agenda in Africa and Europe

Iryna (EIFL) spoke remotely on this LIBSENSE initiative.

EIFL, WACREN and AJOL will collaborate on a new three-year project to support no-fee open access (OA) publishing in Africa (diamond OA) that launches in November 2023 to empower African diamond OA community of practice and offer cost-efficient, open, public, shared publishing infrastructures. https://libsense.ren.africa/wp-content/uploads/2023/08/LIBSENSE-Collaboration-for-sustainable-open-access-publishing-in-Africa.pptx.pdf

I was interested in this because I have been doing some work with LIBSENSE and am developing an awareness of open-science in Africa more generally.

Five Prerequisites for a Sustainable Knowledge Commons

COAR Infographic

I very much like this infographic from COAR. I've been working with COAR on the Next Generation Repositories Working Group and we have been gradually building a picture of a technological future for repository systems. As this work has progressed over the last year or so, it has gradually become clear that there is an opportunity to describe a sustainable knowledge commons. While the Next Generation Repository group is gradually assembling a picture of the technical components and protocols which can make this work, this infographic covers some other, non-technical aspects which will also be required.

I recommend taking a look at the document from which I have taken this image - it adds some useful context.

My New Venture

Antleaf Logo

Today is my final day at EDINA. Rather than stepping into a new role in another institution, I'm taking a bit of a leap into the unknown. I have started my own consultancy business, Antleaf, a vehicle which allows me to take on new, challenging and rewarding work.

I'm pleased to say that, through Antleaf, I have a contract to act as the Managing Director of the Dublin Core Metadata Initiative (DCMI), and I'm negotiating a contract with an institute in Japan to help with an exciting development there, so Antleaf seems to be off to a good start!

If you need the help of an information professional with both development and management experience, please do get in touch.

It feels like a fresh start for me, which is always an invigorating - if slightly nerve-wracking feeling!

Leaving EDINA

After four good years, I am moving on from EDINA. My last day there will be the 16th October.

I have very much enjoyed my time at EDINA, which has allowed me to work with some very smart people, on some great services and projects. For many years EDINA has made a valuable contribution to the fabric of teaching, learning and research in universities in the UK and I am grateful for having had the chance to be a part of that, working in such areas as scholarly communications, digital preservation, mobile development and citizen science, metadata management, open-access repositories and more. I'd like to thank my colleagues at EDINA for their enthusiasm and support, and for the free exchange of ideas and knowledge which has been a foundation of the culture there.

I would particularly like to thank Peter Burnhill, founder and recently retired director of EDINA. It was Peter who brought me into EDINA, and I have benefited enormously from being able to test ideas against his wisdom and insight.

EDINA is in the process of finding a new direction, and I wish my colleagues there the very best of luck for the future.

As EDINA transitions, it feels like a good time for me to do likewise. I have decided to start my own consultancy, something I have been considering for some time. I am excited (and maybe a little nervous!) about it, but this is already shaping up to give me the freedom to do worthwhile and interesting work. More on this in another post soon!

Melissa Terras keynote, BL Labs Symposium, 2016

These are some rough notes from what I thought was an interesting keynote from Melissa Terras, Director of the UCL Centre for Digital Humanities, at this year's BL Labs Symposium.

Melissa has a blog: Adventures in Digital Cultural Heritage and a recommended book: Defining Digital Humanities

Melissa started by asserting that reuse of digital cultural heritage data is still rare, and that preservation of such data is problematic. Of the content digitised under the National Lottery Fund's New Opportunities programme around the turn of the millennium, ~60% is no longer available now.

However, a number of changes, referred to collectively under the unofficial label #openglam, have converged to give hope that the situation may be improving:

  • funders now frequently mandate that research data will be made available for long periods - up to 10 years.
  • licensing is greatly simplified with the growing adoption of the Creative Commons
  • technical frameworks, which address the challenge of making such data available for others to use, are becoming available
  • projects are more willing to address these issues

Melissa then went on to describe how UCL has been working with the British Library's archive of digitised 19th century books. These books, numbering 65,000, were digitised by Microsoft and then handed back to the BL in 2012 under a CC0 license.

The data generated by the digitisation of these books, and the subsequent OCR output, comprises about 224GB of text data in ALTO XML format. This is too much data to make available over the network - and it is this fact which creates the need for better infrastructure services to allow researchers to work with the data.

The UCL Centre for Digital Humanities engages with science faculties as well as humanities faculties. Any member of UCL staff can access what is effectively 'unlimited' local compute power. What has become apparent is that this local infrastructure is typically optimised for science with the following characteristics:

  • one large dataset
  • one or two complex queries
  • single output (the answer), often a visualisation

whereas the requirements for a researcher wanting to work with the digitised books data are more like this:

  • to work with 65,000 individual datasets
  • to make one simple query
  • to generate multiple outputs (e.g. hundreds of pages) which the researcher will take away and process further

UCL has therefore been designing computational platforms which allow users to filter the 65,000 books and find, for example, 300 books about some subject and then to download this data to process on a laptop. This project has also been good for computer science students who have been invited to design platforms to solve these kinds of problems.

Melissa suggested that there was a small number of very common query 'types':

  • searches for all variants of a word
  • searches that return keywords in context traced over time
  • searches for a word or phrase that exclude another word or phrase (NOT searches)
  • searches for a word when in close proximity to a second word
  • searches based on image metadata

... all returned in a derived dataset, in context

Melissa proposes that these would cover 90% of what people researching a collection like the BL's 19th-century digitised books would want. Furthermore, librarians are quite capable of applying these basic recipes as a service for researchers, and they can build on these to offer more sophisticated searches.

Melissa identified the following best practices:

  1. support derived datasets - people want to take a subset of the data away to process further
  2. document decisions - researchers need to know about the dataset - the decisions about how it was generated, provenance, how their query is working etc.
  3. offer fixed/defined datasets (has the data changed since the query was run?)
  4. support normalisations (e.g. if you find more mentions of your query term in later books, it might be because there are more books in the collection from that year)