Not-Standardized Project Management : IQTELL, Directory Opus, etc.
Posted by Alexander Deliyannis
Nov 2, 2011 at 06:51 AM
Dr Andus wrote:
>I see it differently. 99pc of people don’t use
>outliners because 1) they’ve never been taught to develop and use an outline for
>writing; 2) they don’t know outliners exist; or 3) their minds do not work
>hierarchically (artist-types who think visually).
I would add that (4) many people tend to be change-averse; having learned specific tools, they are unlikely to try out new ones if there is no strong incentive from outside, e.g. professional obligation.
This is changing, however; the web ‘forces’ users to continually learn new tools and interfaces. See the recent changes in the Google application interfaces, or how Facebook keeps changing. In this context, tools like Workflowy are gaining good momentum.
Posted by Graham Rhind
Nov 2, 2011 at 07:10 AM
Fredy wrote:
>
>P.S. Would software developers please cease
>to use the term “IQ” in their information management software ?
>Information management even at its very best will NOT enhance your IQ, ...
I would have thought that the IQ used in software names means Information Quality (which they can enhance) and not Intelligence Quotient (which they can’t).
Graham
Posted by JBfrom
Nov 2, 2011 at 07:19 AM
I don’t have any problem with programs using IQ or intelligence related buzzwords.
In my view, IQ is like the bore of the artillery an army has.
Whereas the productivity system is like the logistical infrastructure of an army.
You may have big guns, but without reliable logistics they won’t be putting much firepower on the target.
Posted by JBfrom
Nov 2, 2011 at 07:22 AM
By the way Fredy, it really sounds like you’re trying to create an Emacs Org-mode setup.
It can search within Word docs, across your whole computer, etc.
You can embed tags in all your .org files that Org-Mode builds into custom agenda views.
It’s easy to extract reports for stuff like hours worked per project over a time period, and extract only the billable hours from that, versus non-billable tasks.
Just very very powerful, will grow with you as your needs change.
Posted by Fredy
Nov 2, 2011 at 11:35 AM
@ Graham
An interesting point; I had never thought of that reading (a lack of IQ on my part, assuredly), and I think you might be right, which would invalidate my argument.
@ Alexander
Since you mention Tabbles, that reminds me of Evernote, the “only” one of these outliners, etc., that I know of solely from others’ mentions. The reason is simple: the old, local version was no longer available when I started to gather information about all those systems, and the new one was cloud-based.
Tabbles didn’t allow launching files as a group, and with its graphics I thus lost interest in it, so I don’t know much about it. But to be precise: perhaps it’s a virtual-folders program, or perhaps (and more probably) a tagging program. Btw, I trialled a dozen or so of those tagging programs, and I remember “trialling” Tabbles in that context.
But where’s the real difference between the two concepts, beyond the naming? I think the idea behind tagging was to cluster material “spread anywhere”, especially as an overlay on a pre-existing real (physical) classification system, but not necessarily so: you can also tag anything in a flat collection of items. And if your tags are then held in some superposed structure, e.g. a tree, as Evernote is said to do, you can tag 50k or more items kept in a flat (non-)structure. Btw, EN is said to allow for some 2 or 3 levels in a rudimentary tag tree, hence not really imposing chaotic storage of your material (if I understand it well).
A virtual-folder system, in its basic conception, is technically identical (?), but it implies / superposes, by conception rather than by technical need (in theory, all those things could well sit in one big flat directory), a pre-existing physical storage system that is DIFFERENT from the superposed, ADDITIONAL, secondary filing system. In the end, the only difference lies in their starting points: virtual folders were created to overcome the limitation of the Windows file storage system, which does not allow storing (and properly managing) one file in the several folders / subfolders it belongs to, so they were a file-manager concept; tags were invented for the same purpose in dedicated tagging programs, but also for tagging / clustering not files but ITEMS, within some outlining programs.
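The distinction drawn above can be sketched in a few lines; all names and the data layout here are purely illustrative, not taken from any of the programs mentioned. One flat store of items; "tags" are labels attached to items, while "virtual folders" are named collections that reference the same physical item from several places, the way virtual folders let one file live in many folders at once:

```python
# Hypothetical sketch: a flat item store, overlaid by tags and by virtual folders.
items = {
    1: {"name": "report.doc", "tags": {"billable", "projectX"}},
    2: {"name": "notes.txt", "tags": {"projectX"}},
}

# Virtual folders: the same item id may appear in several folders; no copies are made.
virtual_folders = {
    "Project X": [1, 2],
    "To invoice": [1],   # item 1 also "lives" here
}

def items_with_tag(tag):
    """Tag view: cluster items by label, regardless of any folder structure."""
    return sorted(i for i, it in items.items() if tag in it["tags"])

print(items_with_tag("projectX"))     # [1, 2]
print(virtual_folders["To invoice"])  # [1]
```

In both cases the items themselves stay in one flat collection; only the overlay differs, which is why the two concepts feel technically identical while starting from different problems.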
( I cannot claim co-paternity of the tagging idea, but in my experimental outlining system back in 1997/98 I did co-“invent” the virtual-folders idea, with cloned items put into as many virtual folders as needed; i.e. I created such folders / clones without knowing that others had already invented them. ;-) )
The tagging idea is regularly discussed in a context of “keywords”, while the virtual-folders idea is commonly discussed in a context of physical folders not being flexible enough for information management. Then UR comes along and does clones (in the finest way possible) anywhere within the tree. As said, in my system clones went into separate virtual folders, not into one all-embracing tree. More precisely, my system was one big virtual tree that was never shown; on screen you only had “subtrees” (including that famous superposed “zero” tree I speak of on so many occasions), cascaded through a suite of panes into which sub-collections were loaded, e.g. all (real or virtual) siblings of any given item. Clicking on an item showed its sub-items within the same pane or in an additional pane, and so on.
That is to say, in my system there was no difference between the “real” and the “virtual” positioning of an item, just as there is no such differentiation in UR’s cloning feature today: once an item has been cloned, there is no “original, real” item left; the source item and its clones are all equivalent. Please note that a true virtual-folders system is conceptually different, since there is always one physical, real instance of the item, and then possibly one or more virtual clones that do NOT have the same physical quality. (I did it in a flat collection of ToolBook “pages”, whereas UR does it with a flat database; in both cases the program is a front end to a flat collection of items.)
(For reasons lying in the instability, i.e. bad memory management, of the ToolBook programming language of the time (well documented in many complaints from other ToolBook programmers on the web), the program became unstable for most operations beyond some thousands of items, and the “record fields” within those (invisible) “pages” where content was stored only allowed for 32k of content each, so the system wasn’t marketable for that reason alone… And even today (or at least some months ago; they have released a brand-new version lately that I haven’t checked), that 32k limitation has not been lifted.)
There WAS a real tree in my system, though: on export. In order to make that possible, the system always checked for recursion: any time I moved / cloned an item, a procedure was triggered that checked whether the move / clone was allowed or would create recursion, and if so, it told me why that recursion would occur. Thus it was impossible to make the child of one item the parent of another item that was logically the first item’s uncle, and so on; the global tree was respected at all times, even though it was never shown on screen. One correction: to enforce this, I had “natural children” and “adoptive children”, and only the former were considered for possible recursion problems, I believe (I haven’t touched the system for 10 years now, but these core functions worked flawlessly).
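The recursion check described above amounts to a standard cycle test: before placing an item under a new parent, walk the new parent's chain of "natural" ancestors, and refuse the operation if the item itself appears there. A minimal sketch, with all names (Item, natural_parent, would_create_cycle) invented for illustration:

```python
class Item:
    """A tree node; only the "natural" parent link counts for recursion checks."""
    def __init__(self, name, natural_parent=None):
        self.name = name
        self.natural_parent = natural_parent

def would_create_cycle(item, new_parent):
    """Return True if making `item` a child of `new_parent` would close a loop."""
    node = new_parent
    while node is not None:
        if node is item:          # the item is among its would-be ancestors
            return True
        node = node.natural_parent
    return False

root = Item("root")
a = Item("a", root)
b = Item("b", a)
print(would_create_cycle(a, b))     # True: b is a descendant of a
print(would_create_cycle(b, root))  # False: moving b under root is fine
```

Since "adoptive" links are ignored by the walk, they may cross-reference freely without endangering the exportable tree, which matches the natural/adoptive split the post describes.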
(I remember now: the specialty of the export tree was that it was built exclusively from “natural” children, not also following “adoptive” children, thereby keeping a strict two-dimensional quality, necessary for managing the exported (sub-)trees in any target program, while within my application itself any three-dimensionality was allowed (and flawlessly managed).)
Fractioning the big tree (about 8k items then) into multiple flat lists, updated in real time upon any renaming / deletion / moving / cloning of an item, made it easy to create clones, etc., since the target lists could be open / displayed on screen concurrently with the source lists (and even group handling / batch jobs were available for these maintenance / management functions). In systems like UR, by contrast, this is not as easy unless you make heavy use of hoisting parts of your big tree into multiple tabs.
Now back to Evernote :
If I understand it well, EN “coming from” the outlining idea, EN’s tagging system was originally meant for tagging items, and since tagging 50k items within one flat list isn’t practical, they allow for some (2-3?) levels of tags within a tag tree (if I understand it well). But then, why fall into the trap of tagging only items within EN? Why not also tag files with it?
The big advantage of EN is (if I understand it well) its incorporation of mails. So why not, instead of trying to cram individual items into the EN system, have it tag your MS Word / Excel files, your AO (or whatever) outlines, etc., on top of some “Inbox” kind of items put directly and temporarily into EN?
So the only function missing to make EN the perfect overall system would be group file launching: right-clicking (or whatever) on a sub-collection heading would launch the contents of that sub-collection. Again, the advantage of such an EN system over a corresponding task-launcher system would be the incorporation and management of mails within those sub-collections, while a (= any current) task launcher would not (?) be able to list individual mails within such “task” groups to be launched as a group.
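"Group file launching" as described here is simple to sketch: a sub-collection heading maps to a list of file paths, and launching the heading opens each file with its default application. This is a hypothetical illustration, not EN's API; the collection data and function names are invented, and the per-platform opener commands (`os.startfile` on Windows, `open` on macOS, `xdg-open` on Linux) are the common conventions:

```python
import os
import subprocess
import sys

# Illustrative data: one sub-collection heading -> its files.
COLLECTIONS = {
    "Project X": ["C:/docs/budget.xlsx", "C:/docs/spec.docx"],
}

def files_in_group(heading):
    """Dry run: list the files a group launch would open."""
    return COLLECTIONS.get(heading, [])

def launch_group(heading):
    """Open every file of the sub-collection with its default application."""
    for path in files_in_group(heading):
        if sys.platform == "win32":
            os.startfile(path)  # Windows default-application opener
        else:
            opener = "open" if sys.platform == "darwin" else "xdg-open"
            subprocess.Popen([opener, path])

print(files_in_group("Project X"))
```

The point made in the post still stands: listing a file path is easy, but listing an individual mail inside such a group requires the launcher to understand the mail store, which is exactly what generic task launchers lack.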
( I’m aware that Outlook could be enhanced in such a way, Outlook being perhaps the system best suited to such an integrative task, since mail management within any program not coming from the “mail managers” crowd would face exorbitant problems handling mails well (= in an automated way). And then, EN is a cloud application now, which I cannot accept for myself personally, but we’re discussing concepts here, not my special, additional, individual requirements. )
@ JB 1
To stay within your image : But the Commanding General is the person with the highest IQ, thus ensuring that all logistics and deployments are at the highest possible level, and optimizing strategy. ;-)
@ JB 2
It has been your contributions here, and especially the developments on your site, that made me come back here; I highly appreciate thinking things through.
My impressions of your site are:
- For most people (if not for me, cf. my background), your findings / questioning things must be brand-new.
- You’re asking the right questions (see below).
- But you don’t give the best possible answers to those questions. And how could you, since nobody, worldwide, has given the right answers up to now (or only in private, possibly within big corporations like Oracle, building ace software, for some big corps and at 7-digit prices and more, that we might all be unaware of).
If I understand your expositions (on your site) well, you are trying (but for the time being with means “nobody” could adopt) to solve the problem that, ideally, we would need info access not only in the three-dimensional way any “cloning of items / files” system tries to provide, but at a much more “atomic” level.
The same problem lies behind wiki conceptions. The leading German Kant specialist (who left this forum, some time ago, for the same reasons I had left it: prevailing babble instead of thinking things through, though this has considerably improved lately, as I’m happy to acknowledge) touts CT precisely because CT tries to assist in making such atomized chunks of information available. Btw, MI’s ability to reference paragraphs, not only items, goes in that same (and right) direction; and with my own program in the late nineties I tried some such thing indeed, but failed for lack of a programming background:
The real and utmost problem behind such functions is maintaining the integrity of such “deep links” when those “atomized” info bits are altered later on: no program I know of does anything about that problem, and in my time I wasn’t technically up to solving the problem I had seen very well.
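One common way to at least detect the integrity problem described above (offered here as a hedged sketch, not as what any of the mentioned programs do) is to store a hash of the target chunk alongside the link, and flag the link as stale when the chunk's current text no longer matches; the chunk store and all names here are invented for illustration:

```python
import hashlib

# Illustrative chunk store: chunk id -> current text of that "atomized" info bit.
chunks = {"doc1#p3": "The original paragraph text."}

def make_link(chunk_id):
    """Create a deep link that remembers what the target looked like."""
    digest = hashlib.sha256(chunks[chunk_id].encode()).hexdigest()
    return {"target": chunk_id, "digest": digest}

def link_is_stale(link):
    """True if the target chunk has been altered since the link was made."""
    current = hashlib.sha256(chunks[link["target"]].encode()).hexdigest()
    return current != link["digest"]

link = make_link("doc1#p3")
print(link_is_stale(link))           # False: chunk unchanged
chunks["doc1#p3"] = "Edited text."
print(link_is_stale(link))           # True: target was altered after linking
```

Detection is the easy half; repairing a stale link (re-anchoring it to the edited text) is the genuinely hard problem the post points at.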
BTW, there are some DTP programs that handle such deep links rather well (from both sides), but they are meant as intermediary processing tools between your “work space” and the target, be it publication on paper, on the web, or whatever.
And the second problem with “deep links” (though it should be easier to solve than the main one) is that they trigger the “lost in hyperspace” phenomenon, which is why I don’t touch any wiki at all for now: the information is there, but it is only made available by leaving your core information.
Those bits should then be made available in the same way as any other additional info: in a link list, in a pane gathering all secondary / related information, containing files, mails, and items within your core file or any other file… and of course there should be (via several click variants) several ways to view those info bits.
[
Another original feature of my nineties program, never adopted by any other developer: do “full row select” for your lists, then have your program check where in the row a (normal, right, middle, double…) click occurs. For example: within the first 15 per cent of the line’s total length (so no problems with entries of different lengths)? Then show the element in the principal pane, i.e. change that pane’s content. Within the range of 25 to 85 per cent (= the normal behaviour)? Show the element in the additional pane, your core file remaining available for further editing in the core pane. Within the last 10 per cent? Replace the content of any additional (3rd, 4th) pane with your selection, or open the bit in a 4th pane if you already have info in three. (I had indeed divided my list panes into four sections, and they worked like a charm: right at the beginning, right at the end, before (and including) the middle (for the principal pane), and after the middle.)
(I explained this at length in the MI forum last year. In the same way, you can have three right-click menus instead of one, each of them rather short and holding a bunch of commands that belong together, instead of one lengthy menu with disparate commands of all sorts.)
Today’s screens get bigger and bigger; yet when I asked for just an additional “history” pane in the UR forum (allowing you to click on the precise item you want to view again, instead of having to hit “go back” an undetermined number of times), I was told that UR had too many panes already as it was, let alone any possibility of showing such an item in an extra pane while leaving your core item intact…
]
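The click-zone dispatch described in the bracketed aside above can be sketched in a few lines. The zone boundaries (first 15 per cent, 25-85 per cent, last 10 per cent of the row's total width) are taken from the post; the function name and the string labels are purely illustrative:

```python
def dispatch_click(x, row_width):
    """Map a click's x position within a full-width row to a pane action."""
    frac = x / row_width
    if frac <= 0.15:
        return "show in principal pane"   # replaces the core pane's content
    if 0.25 <= frac <= 0.85:
        return "show in additional pane"  # core pane stays open for editing
    if frac >= 0.90:
        return "show in 3rd/4th pane"     # replace, or open a 4th pane
    return "no action"                    # gaps between the zones

print(dispatch_click(50, 1000))   # "show in principal pane"
print(dispatch_click(500, 1000))  # "show in additional pane"
print(dispatch_click(950, 1000))  # "show in 3rd/4th pane"
```

Because the zones are fractions of the row's total width ("full row select"), entries of different text lengths all behave identically, which is the detail the post singles out.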
This is just part of what I had to offer, and not a single element of it has since been realized by better, real programmers.
But any system meant to be adopted by “anybody” and able to solve these problems must deliver both: the programming capacity AND a viable GUI. The latter I had realized 13 years ago already; the former you are trying, in vain, to “sell” today.
As said elsewhere, pompously: I am certainly an ace software designer (but an awful programmer); but then, most software developers out there are just awful software designers, and that second assertion is a fact.
And by design I mean the GUI, AND the unseen architecture behind it. Or let me repeat one of my favorite sayings here:
In order to keep a program simple but functional, there is a lot of complicated programming to do behind the scenes; the reductive equation “keep it simple on the outside by keeping it simple within the source code” is just another lie.