Visualization of Hook data: graphs, etc

This topic is for the Hook community to discuss current and potential graphical visualization of Hook data.

To see its data, Hook provides a context-free search tool and a context-sensitive hooked-link browser. With the latter, you bring up Hook on a given node and Hook shows you what’s connected. You can navigate this network: select an entry in the list and hit the right arrow key (or click the > button) to focus on that node and see what is directly connected to it. This supports contextual information access.

We have several lines of R&D in progress to augment this.

There are some obvious additional visualizations and modes of interaction that users might want, such as showing indirectly connected nodes, nesting (an ‘outline’-style view, though Hook data is a collection of disjoint networks, not strictly a tree), a traditional master-detail library/bookmarks view, and graphical representation/browsing.

We’d like feedback and recommendations from our users about approaches to visualization they would like Hook to support, for example making it easy to use Hook’s data with visualization libraries. Hook has an AppleScript API that would in principle allow advanced users to build tools that draw graphs of their data; they might want the API augmented with specific methods. The point of this topic is not so much for CogSci Apps to pronounce itself on its direction or preferences, but to let the discussion unfold. Please feel free to mention particular graphical libraries you’d like to use with Hook (such as Neo4j) and what you would like to do with them.
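To make this concrete, here is a minimal, purely illustrative sketch of feeding Hook data to a graph library in Python. It assumes you have already produced a file of source/target pairs yourself (for example via the AppleScript API or an export); the file name hook_links.tsv and the tab-separated format are assumptions, not anything Hook ships.

```python
# Minimal sketch: draw a network of hooked items with networkx + matplotlib.
# Assumption: hook_links.tsv is a file you produced yourself, with one
# "source<TAB>target" pair per line. It is illustrative only.
import csv

import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()
with open("hook_links.tsv", newline="") as f:
    for row in csv.reader(f, delimiter="\t"):
        if len(row) == 2:
            G.add_edge(row[0], row[1])

# A fixed seed keeps node positions stable between runs, which is in the
# spirit of the "spatially persistent" graphs discussed in this thread.
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_size=300, font_size=8)
plt.show()
```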

5 Likes

This may not be the kind of feedback you are looking for, but I feel it’s important to frame this topic from a broader view first. And forgive me, I am going to go way out…
Our human cognitive abilities have been evolving inside us since long before we were even human. The two earliest of these, and thus the most deeply “hard-wired” in our physical brains, are what we can term spatial and social cognition. In other words, we have been thinking about where things are—in relation to each other and to us—and what those relations mean—family? friend? foe?—for a very long time.

For this reason—and skipping a lot of stuff—spatially persistent relationship graphs are the most immediately cognitively graspable forms of representation and interface.

Most of our computing GUI is based on various representations of text and glyphs. These function in what we might call “semantic cognition”, which is one of our more recent evolutionary acquisitions (followed by temporal cognition, which most of us still suck at).

Text and glyph UIs manifest as lists and icons; these need to be read and interpreted for semantic meaning, then, theoretically, mapped to some sort of understanding of the system we are interacting with. This puts a tremendous cognitive load on people.

Let me give you an analogy from a previous professional context of mine. (I was a design director at Nokia Maps, responsible for shipping maps and navigation service UX to millions of people all over the world.)

I can give you a set of directions to a place you have never been, textually, as a list of steps. You will slowly, gradually, step-by-step, make it to the destination, assuming two things:

  1. my spatial directions are semantically encoded in a way that is meaningful to you (in your language, clearly, with the same concepts of up and down and left and right and types of landmarks and sequence of steps) and
  2. you are adept at—and have the mental energy and experience to perform—transcoding text to real-world experience, matching what you read to what you see in the world.
    (Believe me, in our extensive global testing experience, no two people produce or interpret written directions the same way… nor do they read maps the same way!)

So, when we speak of finding files and data objects that are related to each other and proximate or distant to our current “context in the system”, why do we rely on long lists of text and icons?

I propose to you that the superpower of Hook is not the “hooks” themselves, but the contextually appropriate maps (graphs) of your files that they enable.

I am very pleased, Luc, by your post above and I hope this discussion unfolds in a fruitful direction, where we can see all kinds of spatially persistent graphs and maps of contexts and relationships of the “files” and various objects which we work with every day in our personal “computers”… or knowledge systems. :slight_smile:

3 Likes

It is quite apposite and delightful feedback. We are CogSci Apps after all.

Coincidentally, I am a little bit involved in the META-MORPHOGENESIS project, which posits that the very basis of cognition is spatial. For example:

Sloman speculates that perhaps Alan Turing’s morphogenesis paper was not merely about biology but a prolegomenon to understanding how brain chemistry could be the basis of cognition due to its quantum and spatial properties. (Sloman is one of the few who noticed the import of Turing saying (in 1936?) that machines can do ingenuity, but not necessarily intuition.)

Fun paper:

Sloman, “Diagrams in the Mind?”, which tries to understand how Mr. Bean can perform a trick with his bathing suit.

I also try to use spatial representation in problem solving and learning.

See also Homage to Aaron Sloman, Winner of the 2020 APA K. Jon Barwise Prize – CogZest.

Having said that, the meta-morphogenesis project is quite fundamental science. Cognitive science (including AI) has, as far as we know, made precious little progress in understanding fundamental visual-spatial cognition, particularly where geometry, topology and continuity are involved. That said, there is lots of research on higher-level network/graph development and usage (graphs being discontinuous/relational), and we expect this knowledge can be leveraged.

Simpler, easier for most users to assimilate and more familiar, I believe.

I look forward to seeing what will arise from this too. Our approach is to facilitate the usage of Hook data (including upcoming developments of the data) by information visualization tools that are local (private to the user), and to invite collaboration with visualization experts (developers, knowledge representation experts, etc.)

1 Like

Visualization of the network of links and associations created with Hook is both my biggest hope for Hook, and my biggest frustration. I intentionally underuse Hook because I cannot easily discover where hooks between files, sites, documents within apps (such as Ulysses) exist.

It’s interesting that the company is interested in how advanced users might want to use software to access a database of hooks and build their own visualizations – but, and don’t take this wrong, that seems to imply that maybe Hook is targeted at the 1/10 of 1% of Mac users who have the technical wherewithal and interest in rolling their own. If that’s the target audience for Hook, then I think I wandered into the wrong house. :frowning:

My needs are simpler – and I won’t be writing software. I love the graphs that apps such as Obsidian (or Roam), and others, provide, displaying nodes and edges for everything in a database. That’s it. Provide that graph for all hooks in Hook’s database; provide a way to navigate it; provide a way to search it; and we’re done.**

Hopeful, but not optimistic :sob:
Katie

** I think it is probable that such a graph is a very difficult computational problem, with big hits on performance for Hook and the machine. That’s fine. Just say “Nice. Can’t do it.” :grinning_face_with_smiling_eyes:

2 Likes

Thank you, Katie.

The next big release of Hook will have a major new visualization, but it’s not a graph per se. If you look at most apps out there that present a window onto data, they are actually not graph-based. IMO, it’s not simply that graphs are harder to draw, but that they are harder to interpret and use.

It’s actually the other way around: I think graphs are for a more advanced minority.

We are following Apple, who are arguably the best UI designers for the masses; consider Finder with its various views and its search tool (i.e., Finder > View). We have a list view, and navigation that is akin to column view (not quite there yet).

But we want to support different approaches, so we have an API and will add more. If a developer wants to take the risk and implement a solution that will appeal to what we think is a small minority, the API is there for them (and we are open to augmenting it for working with Neo4j and the like).
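To make the Neo4j idea concrete, a minimal sketch using Neo4j’s official Python driver might look like the following. The item names, the Item label, and the HOOKED_TO relationship are placeholders I made up; the pairs would come from whatever export or API method a developer uses, and nothing here calls Hook’s actual API.

```python
# Illustrative only: load hooked-item pairs into Neo4j with its official
# Python driver. The pairs, the :Item label, and :HOOKED_TO are made-up
# placeholders; real data would come from an export or a script of your own.
from neo4j import GraphDatabase

pairs = [
    ("Project plan.md", "Budget.xlsx"),
    ("Project plan.md", "Kickoff email"),
]

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
with driver.session() as session:
    for a, b in pairs:
        # Hook links are bidirectional; the stored direction is arbitrary.
        session.run(
            "MERGE (x:Item {name: $a}) "
            "MERGE (y:Item {name: $b}) "
            "MERGE (x)-[:HOOKED_TO]->(y)",
            a=a, b=b,
        )
driver.close()
```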

We have a new, very experienced and talented information-management person affiliated with CogSci Apps (who for now will remain anonymous), who is keen on visual representations and will also provide input. So it’s not that we are offloading the design to our customers; we are opening the discussion to hear opinions.

In a previous project we brought in a graph tool. I warned the PI: this is just eye candy. And to my knowledge that is all it was. Some people liked the candy, but most people just needed to get their work done and didn’t want to bend their brains to using the tool.

Of course, I’m a scientist too, so I am open to the possibility that when we look back at it we will find that “wow, we should have done the graphical stuff sooner.” Providing an API at this point is a safe way to go. When we’ve done the other improvements, we will decide what to do based on demand, threads like this, in-house experimentation, and whatever comes of our ecosystem’s work.

I.e., I did not start this thread to shoot down the idea of graphs, but to have a discussion and see where it goes, while being honest about my current assessment.

1 Like

I guarantee it. :slight_smile:
Text- and icon-based UI is simply the norm because it was what was possible as our early GUI environments developed, and it is now ubiquitous. That does not in any way mean it is good, appropriate, effective, or efficient for the user.

P.S.: Apple is known for industrial design. Their software UX is pretty, but not regarded very highly at all in professional UX circles. Zero innovation there, in fact. They provide solid infrastructure, though (SDKs, APIs and programming languages). They want US to figure out the next interfaces. Hook has an opportunity here.

Bonne chance! :slight_smile:

2 Likes

Thank you @borisanthony. I agree. Basically, Hook is a response to this opportunity: we use existing APIs in macOS and various apps, combined with standards where applicable (such as RFC 5322), to address a need. And in fact it’s better that Apple keep their OS stable rather than innovating madly in all directions; they have a huge bug list that they should address first. What I am hearing in this thread is that there is some demand for alternative, network-based visualization in Hook. (Like I said, we currently have non-graph visualizations in the works, and a collection of other features planned to enhance discovery and navigation. We’ve built a framework from which Hook can evolve.)

We’re listening with interest; we look forward to specific suggestions and are also open to working with other software developers as we help the Hook ecosystem evolve.

1 Like

I can’t agree more. Apple cracked the nut of designing a premium product for the masses in an aesthetically pleasing way. The fact that Hook sits on top of macOS implies Apple purposely chose not to include this feature; Hook found a need. Most of the world doesn’t want or need bidirectional links, or at least hasn’t recognized it yet. The niche who does need them also needs spatial organization of troves of information, for reasons @borisanthony outlined here. The modern professional, the information worker, is evolving in this direction. Giants before us have trailblazed this with graph DBs, Obsidian, Craft, etc. Their users are the niche of potential customers Hook can rely on to foreshadow what the broader market will demand in the future. Hopefully Hook capitalizes on this before its competitors, none of whom integrate with the OS like Hook does… yet.

1 Like

As a new Hook user, I think a visual view and navigation would be an excellent addition (with an API for developers).

And if full tag functionality is implemented in Hook, maybe even a tag cloud for search would be handy.

I used to program decades ago, but recently have been studying Swift. Every now and then, I make a note about a project to practice coding. After using Hook for less than ten days, the really cool project that popped into mind was a relationship/navigation graph for the Hook data!

Also, I believe that Hook users would appreciate the functionality, as most (if not all) of us are wired for visualization anyway. Besides, can anyone honestly say that in school we navigated to our classrooms by way of our class schedule? Did we all walk around with the schedule in hand looking at the building number, floor level, and classroom number? Of course not. Personally, I couldn’t remember any of those details, but I sure remembered how to visually navigate the campus and find my way to class. :upside_down_face:

However, although I would welcome the graph feature, I would prefer a stable core Hook product first. Secondly, if a tag cloud is easier to implement, then at least it would be a new way to find hooked data. But, really, if other software companies would add linking support for their data, that would probably be better than any major new Hook feature that might make Hook less stable and easy to use.

Besides waiting for a major feature release, there probably is a way to use Hook’s ‘Copy All Links’ feature and then paste the links into a MindNode (or similar) document. Perhaps those links would be turned into nodes? I don’t know, since I don’t use any mind-mapping software, although I have been researching productivity apps (which is what led me to Hook in the first place).
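For anyone who wants to try that conversion, here is a rough sketch in Python. It assumes the copied links come out as markdown-style [Title](hook://…) lines (the actual clipboard format may differ) and that the target app imports OPML, which MindNode supports as far as I know; the file names are placeholders.

```python
# Rough sketch: convert a pasted "Copy All Links" list into an OPML outline
# for import into a mind-mapping app. Assumes markdown-style "[Title](url)"
# lines in copied_links.md; both file names are placeholders.
import re
import xml.etree.ElementTree as ET

text = open("copied_links.md", encoding="utf-8").read()
links = re.findall(r"\[([^\]]+)\]\(([^)]+)\)", text)

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Hooked items"
body = ET.SubElement(opml, "body")
root = ET.SubElement(body, "outline", text="Current item")
for title, url in links:
    # Each hooked item becomes a child node carrying its link as a URL.
    ET.SubElement(root, "outline", text=title, type="link", url=url)

ET.ElementTree(opml).write("hooked_items.opml",
                           encoding="utf-8", xml_declaration=True)
```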

At this point, I shall close, because I am tired and I believe that I am just rambling on to the point of being incoherent! Thank you for reading/listening.

1 Like

Welcome to the Hook Productivity Forum, @nicToRaLEATe. And thanks for asking.

Yes, this works with MindNode.

“export all links” only exports links currently listed.

How can we export all links in a format that an AI like Napkin can understand?

Sorry about this issue.

Just to be sure: did you export links from the Hookmark preferences window -> General -> Linking -> Export, @Cell5TL?

Thank you

Sorry, my bad. I misunderstood the docs. It works great!

By the way, we can get the filenames out with:

cat "Hookmark export file" | awk -F 'hook_bookmark://' '{print $2}' | awk -F '\?bookmark' '{print $1}'