Processes not tools
Posted by Amontillado
Sep 25, 2018 at 10:03 PM
I wonder if this is related to the doorway effect, which has a profound effect on my focus - https://www.scientificamerican.com/article/why-walking-through-doorway-makes-you-forget/
The Brain seemed to trigger doorway effect responses. When I clicked on a thought and it became the center of attention, I had a good sense of what I was thinking when I originally added it.
Chris Thompson wrote:
>That’s pretty fascinating—you’ve encouraged me to read Whittaker and
>Bergman’s book. I’m curious if they looked at any spatial alternatives
>(like hierarchical folders in a mind map configuration), given that
>locational cues seem to be important?
>
>—Chris
>
>Pixelpunker wrote:
>>
>>Whittaker and Bergman tell me that, surprisingly in light of all the
>>advanced approaches, the traditional folder or outliner hierarchy is
>>best and they have some empirical data to back that up. The explanation
>>for this is that a fixed folder hierarchy leads to retrieval by
>>locational cues and does not tax the verbal system. It’s sort of like
>>why the method of loci works as a mnemonic aid. They also determined the
>>optimal number of items per folder and hierarchy depth that would
>>optimize the search time by using linear regression…
>
Posted by Pixelpunker
Sep 26, 2018 at 07:55 AM
Alexander Deliyannis wrote:
Pixelpunker wrote:
>>I once stumbled, in a German book about time management from 1983, on a
>>checklist for perfect time management.
>
>And its title is?
The title of the book is „Keine Zeit?“ by Regula Schräder-Naef; the publisher was Beltz.
Posted by 22111
Sep 27, 2018 at 02:39 PM
PP’s citations:
1) Yeah, when it’s in a book, it becomes the Bible; had it been said in a forum, it’d be crap. As for what they say under a): I said it here years ago. The lack of “retrieval helped by spatial positioning” (both by folder/subfolder position and then by “manual” order within the subfolder) was even my argument against tagging, for any info element that’s deemed to become useful within the context of other such elements; I differentiated these from “unbound” elements, where tagging is perfect for multiple clustering by various criteria of various kinds (collections of customers, media…). You could speak of grouping vs. clustering, of signed vs. unsigned grouping, of (real) grouping vs. (just) bunching, or similar; the real conceptual difference lies just in the (ideally optimized) presentation of the possible filing targets (cf. the flat storage of “hierarchies”, in fact graphs, in pseudo-tree form, both in relational db’s and in the file system: in the latter, it’s just the MFT which then “puts it all together” into the form you see in your file manager, as the relevant cross-tables plus the necessary processing code do in the db).
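(To make the “flat storage, tree-shaped presentation” point concrete, here is a minimal sketch of my own, not taken from any of the tools or papers discussed: a single relational table holding parent links plus a manual sort key, from which the familiar folder view is reassembled at display time. All table, column and item names are invented for the illustration.)

import sqlite3

# One flat table; the "tree" only exists by virtue of the parent links.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE node (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES node(id),  -- NULL marks a top-level folder
        name      TEXT NOT NULL,
        sort_key  INTEGER NOT NULL DEFAULT 0    -- preserves "manual" sibling order
    )
""")
conn.executemany(
    "INSERT INTO node (id, parent_id, name, sort_key) VALUES (?, ?, ?, ?)",
    [
        (1, None, "Cars", 0),
        (2, None, "Insurances", 1),
        (3, 1, "Car insurance", 0),        # could equally be cloned under "Insurances"
        (4, 2, "Household insurance", 0),
    ],
)

def print_tree(parent_id=None, depth=0):
    """Reassemble the pseudo-tree from the flat rows, honouring manual order."""
    rows = conn.execute(
        "SELECT id, name FROM node WHERE parent_id IS ? ORDER BY sort_key",
        (parent_id,),
    ).fetchall()
    for node_id, name in rows:
        print("  " * depth + name)
        print_tree(node_id, depth + 1)

print_tree()   # Cars / Car insurance, then Insurances / Household insurance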
And what they say under b) in that citation is just crap, the sort of pseudo-scientific crap we’re flooded with nowadays; of course it all depends on the respective nature of the data. Sets of more than 21 sibling elements/items (which can be containers/“branches” or “leaves”) are perfectly acceptable, or even advisable, whenever creating further hierarchisation would be unnatural: whenever you have to ask yourself in which of the sub-folders you may have put some element, you’ll have done it wrong. I’m not speaking here of positional disambiguation (car insurance under cars or, correctly, under insurances), which is to be resolved by container cloning. Lots of details on this here in the forum, and in the comments for the last UR offer on Bits just days ago; but I suppose that while inexact info in the right place is fine, exact info in the wrong place is unacceptable, so be it. Btw, larger, and even minor, sets of siblings can be divided into subgroups by divider lines, cf. bookmarks in FF or elsewhere; I spoke about this here within the same “tagging vs. hierarchy” context. Also, and most importantly, further hierarchisation vs. just listing as “siblings” is a function of retrieval needs, incl. the frequency of access to that sub-folder, just as manual sort vs. alphanumeric sort is. And last but not least (I think I wrote about that detail here, too): some elements in such (even “long”) lists should be able to bear a few (1, 2, 3 perhaps) sub-items, automatically “expanded”, so that technically they’d be parents of respective “subfolders”, but visually and conceptually those items/siblings would just contain some sub-siblings. It’s all about the individual nature and the individual needs, just like the demand for individual metadata (“attribute”) sets within different folders in the usual PIM db’s.
2) This is an illegitimate reduction of what’s needed; of course you’d also need access to your knowledge base, which the citation misses. But then, the main point of the OP’s post does not seem to have been grasped by any commentator: the perishable nature of most info, with that deterioration taking place at totally different speeds for different info, even within the same context. And of course, wading through all this perished, or worse, literally corrupted info is both very time-consuming and constitutes, more or less, a hindrance to mobile access; not to speak of the possible consequences of relying on outdated info.
The big irony here being that the necessary amount of (real, not imitation) AI will only be available from the Big 5, so the real solution to the real problems will only come from an even much more restricted set of suppliers than the OP had in mind, and, of course, only if you give them total read access to your data repository. Why then maintain your own data stock? Well, in order for them to be able to weight the new info to be sent to you against your alleged needs, derived from the analysis of your old data (just like FB does, and Google does if you don’t delete all your web activity every day)… and, of course, to preserve at least a little, and percentage-wise ever-receding, amount of dying genuine data into a time when almost all the new data you could get will be heavily biased.
OP is right in their remark that at the end of the day, it’s about core data for decision-making, not about growing data collections, and we’re all aware of the unsolved problem “storing items into the right place(s) / append all (!) the right / possible key words (tags) to the item VS rely on search only, and how much paid staff time this will cost”
(the latter paradigm seemingly working so well with OL since there it’s almost exclusively about given, known-by-name customers, suppliers, press, faculties (for academics), etc., and then by reverse chronology (and only here and there about some goods or ideas within text / non-meta-data); from that ideal search situation, lunatics then infer the perfect applicability of the search-only paradigm to everything),
while, not mentioned by OP, even the decision WHETHER some info is pertinent today OR will become so someday in the future bears some quite incredible cost in itself, both for thinking about it and then for possibly discarding it, especially if that info, deemed marginal today, becomes crucial tomorrow… AND will not be available anymore by then, either by being hidden or, worse, by having been replaced by false information.
Anybody doing extensive web bookmarking instead of quick downloading will know what I’m speaking of, and that’s not even mentioning the impending problem that those of the Big 5 providing the AI will, inevitably, actively control what info (correct, wrong, or, most Machiavellian, seemingly correct and complete, but unfortunately bearing some critical flaws and omissions) they deem suitable for you and your kind.
(And yes, I know about the half-baked pseudo-AI attempts to help with filing, and yes, this could be done much better than at this point in time; also, as I said here before: when in any doubt, download, but don’t (deep-)file yet: keep it dormant; and perhaps even leave those things in higher-up inboxes even when possibly needed: combine visual (skimming) and electronic search of the deep-filed-and-ordered info AND of the less-deep-filed-and-unordered info, being ready to do some additional precision-filing of items in the latter group based on those semi-directional search results.)
(Did you know that e.g. IMDb (incl. Pro and other sidecars) is Amazon? Most of you certainly did not. And as an aside, Bertelsmann is Europe’s biggest publisher AND Europe’s biggest censor; yes, in Europe censorship has become a viable industry, too… And that’s only the beginning: in the future, any industry will depend on crucial info above every other means, and info will not be sold anymore, but used in-house by today’s “info” merchants, tomorrow’s sell-it-all merchants. Btw, Apple isn’t one of them, so you could call ’em the Big 4, or then, let’s face it, it’s all about Amazon-Google.)
Posted by Paul Korm
Sep 27, 2018 at 04:49 PM
I don’t pretend to understand 22111’s posting, but this caught my eye. Looking back over the decades, I would have liked to have preserved the “core data for [my professional and personal] decision-making”, but in probably 95% of the cases I had no idea what decisions I would make in the future, and therefore no idea what “core data” I might need a year, two, five, or ten years thence.
Storage is cheap. Search is pretty good on most platforms (except iOS). So in the digital world I’m unconcerned where I stuff things—I almost always find what I needed. In the analog world if I haven’t used the thing in 6 months, it’s trash.
22111 wrote:
>OP is right in their remark that at the end of the day, it’s about core
>data for decision-making, not about growing data collections, and we’re
>all aware of the unsolved problem “storing items into the right place(s)
>/ append all (!) the right / possible key words (tags) to the item VS
>rely on search only, and how much paid staff time this will cost”
Posted by Dellu
Sep 27, 2018 at 05:43 PM
>Storage is cheap. Search is pretty good on most platforms (except iOS).
> So in the digital world I’m unconcerned where I stuff things—I
>almost always find what I needed. In the analog world if I haven’t used
>the thing in 6 months, it’s trash.
I agree with this point.
The idea that information or data gets outdated seems exaggerated to me. It is not unusual for an academic to cite anything from an Ancient Greek work to an 18th- or 19th-century analysis of some fact. Bertrand Russell’s 1905 paper is one of the most cited papers in 2018.
It is very hard to say whether any published work can be outdated at all. That is the case at least in my field (linguistics), because almost every published paper contains some linguistic fact that you can potentially use in your analysis.
I personally don’t attempt to organize or read every PDF article. But I use all of them as my database to find what I am looking for. I never throw away any PDF file unless it is a duplicate. So far, I have accrued 14 GB of them. It is just a local database that I constantly search when I need something.
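(A minimal sketch of that “keep everything flat, just search it” workflow, assuming the PDFs sit in one local folder; the folder path and the query are placeholders, and real full-text search would additionally need a PDF text extractor, which is out of scope here.)

from pathlib import Path

# Hypothetical location of the flat, unorganized PDF collection.
PDF_DIR = Path("~/pdf-library").expanduser()

def find_pdfs(query: str):
    """Return every PDF whose file name contains the query, newest first."""
    if not PDF_DIR.is_dir():
        return []
    query = query.lower()
    hits = [p for p in PDF_DIR.rglob("*.pdf") if query in p.name.lower()]
    return sorted(hits, key=lambda p: p.stat().st_mtime, reverse=True)

if __name__ == "__main__":
    for path in find_pdfs("russell 1905"):
        print(path)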