TheBrain 10 released
Started by Alexander Deliyannis
on 10/30/2018
Dr Andus
11/5/2018 9:48 am
moritz wrote:
In my experience the web and mobile apps are primarily companion apps
for the Mac and Windows versions of The Brain.
Well, it's certainly not how they are promoting and selling it. It's called "TheBrain PRO Service" and they are charging USD 180 for "exclusive pro features."
For that I would expect pure perfection and luxury, a premium buttery-smooth service. In comparison, WorkFlowy charges $49 and Gingko something similar, and they do deliver what they promise.
moritz wrote:
If you don't have access to a Mac or Windows machine, I don't know if
The Brain would work as well for you.
The thing is, I do have what I thought was a reasonably powerful Windows laptop (albeit Win7 and 3 yrs old), but TheBrain is incapable of running properly on it.
Jon Polish
11/5/2018 1:14 pm
Hi Dr. Andus:
To answer your questions.
Dr Andus wrote:
I have started to trial TheBrain 10: the Windows version, the web
version (in Chrome on a Chromebook), and the Android (v. 9) version (on
a Chromebook).
The initial experience has been very disappointing, even though I love
the concept and I’m in desperate need of a tool like this one.
I’m having major problems with the Windows version. I’m
running it on a 3-yr old Windows 7 mobile workstation with a core i7
processor and 16GB RAM. In those three years I have not come across
software that I wasn’t able to run due to resource constraints.
My setup is very similar to yours and I am not having the problems you are experiencing. There is a caveat. See below.
Just last week I upgraded my Dragon Naturallyspeaking from Pro v. 14 to
v.15, and it’s running perfectly.
Yet TheBrain 10 is running so slowly that it is practically unusable.
After every click on anything I have to wait 4-5 seconds for anything to
happen, often wondering if my click was even registered. At the same
time the fans are spinning at full speed non-stop while TheBrain is
running, suggesting that it’s a major drain on resources.
Yes. The first several sessions with version 10 were painfully slow for me as well. Over time it has become much more fluid. I think this has something to do with stuff going on in the background, but it really does settle down. I still find the Java-based version 8 much better in terms of features and speed (and use it for my work until 10 matures), but version 10 is getting closer.
The installation itself was painfully slow. When I tried to update the
software the next day (as a new version came out), I waited 6 hrs for
the “Updating TheBrain” dialog box to finish before giving
up and cancelling. I tried three times, it’s impossible to update
the software from within itself. I may have to uninstall it and then
reinstall the new version.
Installation was ridiculous. I am very patient by nature, but this was truly a test. Updating is worse.
I haven’t had this sort of trouble installing software for over
a decade. It felt like time travel into the past. The interface also
feels dated.
Is this primarily a Mac software and the Windows version an
afterthought?
I think it always has been both.
The web version worked a bit better, but it’s missing some
essential features, and even the ones it has don’t always work. It
also takes a long time for some of the features to kick in sometimes
(and I have a decent broadband connection).
For a while I didn’t think I was able to add any notes because
the interface just wouldn’t activate. Or there seems to be an
option to set a link type, but when you set it up, it is not saved and
disappears.
I do not have 10 on this computer, so I am working from memory. Under preferences, there is an option to turn the link labels off. I think it is off by default.
The sync between the various clients also feels archaic and seems to be
rather slow. Having gotten used to working in web apps in recent years,
this one just feels like an unfinished beta from some years ago.
Ah, check if you are syncing. This could contribute to the slowdown. I do not sync.
Interestingly the Android app was the most responsive one of the lot,
but unfortunately it has even fewer features than the web version.
This has been a very frustrating experience, especially considering how
much money they are asking for this. Having seen what this software
could do in theory, I would be willing to consider paying for it. But
how can I do that when the performance is so poor and there are such
gaps in features among the various platforms?
As I was trialling the software, I was thinking that there is a good
concept here, but terrible execution, in terms of product quality (I ran
across some bugs in the web version as well).
My next thought was: how come no one has tried to emulate this idea but
producing it better, with leaner code, using faster servers etc., in a
package that would actually work?
Then I came across this post, so other people have also asked this
question. It seems that there is some kind of a patent preventing the
replication of this idea:
http://forums.thebrain.com/post/thebrain-on-mac-incredibly-slow-9822809
This is a pity because the idea itself doesn’t seem that
revolutionary: essentially it’s Tim Berners-Lee’s idea of
hypertext and hyperlinks, so I don’t get how that could be even
patented.
Is there anything out there that emulates this model and is more
useable, or is this a monopoly “take it or leave it”
situation?
None that I am aware of, but you could approach many of TheBrain's features with the newest version of MindManager.
My biggest issue has always been that I can never get TheBrain to include attachments and web links in the search results. I have scoured the web and consulted with tech support. It seems to be an issue with Windows 64 bit desktop search upon which TheBrain depends. It works fine on 32 bit flavors though.
Jon
Stephen Zeoli
11/5/2018 1:30 pm
Dr Andus,
TheBrain should most definitely work better than that for you. I was using it on an old Windows 7 machine without those kinds of delays. I suspect there is a way to fix this. I suggest going to TheBrain home page and starting a chat... the customer service people who chat with you I have found to be very knowledgeable. They may be able to help you right away. If not, they'll suggest you contact customer service.
I think you'll find this is worth your while, because TheBrain works way better than that... usually.
Also, TheBrain is definitely built for Windows.
Steve Z.
Alexander Deliyannis
11/5/2018 4:32 pm
Stephen Zeoli wrote:
Dr Andus,
TheBrain should most definitely work better than that for you.
I fully concur. My main machine is a refurbished Core i5 several generations back, with 8 GB. TheBrain 10 has been as fast as previous versions, and the memory footprint is very reasonable (2 threads of about 100 MB each; Firefox seems to demand as much or more for just one tab...). I run TheBrain just after startup and keep it open at all times for orientation, and I've never had any issues.
However, I can confirm that the latest 2-3 updates of TheBrain don't seem to install automatically--the update seems to take forever. I close TheBrain, download the latest version and install it without uninstalling first. This seems to work fine.
BTW, auto-update today informs me that version 10.0.25.0 is available, but the download available on the website is still 10.0.24.0, so it seems that existing users are the first to be updated.
Alexander Deliyannis
11/5/2018 5:16 pm
Dr Andus wrote:
This is a pity because the idea itself doesn’t seem that
revolutionary: essentially it’s Tim Berners-Lee’s idea of
hypertext and hyperlinks, so I don’t get how that could even be
patented.
Is there anything out there that emulates this model and is more
useable, or is this a monopoly “take it or leave it”
situation?
As far as I understand, TheBrain has not patented the concept of interconnected information items, but the specific visualisation of those interconnections. I don't know to what extent this limits other similar visualisations, but I know of at least two more products: Thinkmap, which drives https://www.visualthesaurus.com/ and can be licensed to developers for their own applications; and Inxight StarTree--Inxight is now owned by SAP, which plans to use the technology in its own products. Others have been mentioned here as well, e.g. used in visualising molecular models.
As far as I know, TheBrain is the only such application which is consumer-targeted and directly integrated with the file system, ready to use.
Dr Andus
11/5/2018 9:31 pm
Jon, Steve, and Alexander,
Thanks for your feedback. I'll try to see if I can work this out with TheBrain support.
I can definitely see the beauty of the thing. The visualisation enforces a particular type of focus on an issue, for which you can see the most immediate context (hierarchical and lateral relationships), but nothing more (although you can see other pinned or recently visited items, which is great).
This is where WorkFlowy is lacking, as long lists of things can become overwhelming, but when you zoom in on an item, you can no longer see the hierarchical context or even the siblings.
I could also make good use of the timeline view for project management purposes.
Dr Andus
11/6/2018 9:04 pm
TheBrain users,
Do you back up your TheBrain externally, and how?
I'm thinking that if one invests so much money, and then time and effort into developing a 30GB database over many years, it would be a shame to lose that for whatever reason.
I came across a user who claimed to have lost data:
https://www.reddit.com/r/hwstartups/comments/2tkbct/tools_for_project_management/co0q5ha/
and the solutions for backup suggested by the company seem kind of complicated and involved and not fool-proof:
http://forums.thebrain.com/post/backing-up-3425-8340032
What would be the most straightforward way to keep your data, and have it in some kind of usable form, should TheBrain ever shut down?
Stephen Zeoli
11/6/2018 11:52 pm
Dr Andus,
Truth is, I haven't been backing up my brains. They live in three places, so I haven't thought about it. But I should. That said, I just checked out the backup command and it seems pretty straightforward. I backed up a brain with 5,000 thoughts and 700 attachments. It took under a minute.
The only issue with backing up is that external attachments are not included.
Steve Z.
Jon Polish
11/7/2018 12:24 pm
One of the major differences between 8 and 10 is that 10 uses a database system that must reside in a specific location whose path cannot exceed 50 characters. Backing up would not be a problem but restoring is. Apparently the developers are concerned about database corruption secondary to users moving databases. I have not experienced this problem with 8.
Out of curiosity, I tried backing up 10 and then continued to add data to my test database. I then restored the backup. In short, it was not pretty. Again, this was very simple with version 8.
I think TheBrain folks want you to keep your database synced with their cloud. I don't want to do that.
There is another option that would avoid data corruption in 10. Create a Brain archive file and use that as a backup.
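The 50-character path limit described above is easy to trip over when choosing a restore location. Here is a minimal sketch of a pre-restore sanity check, assuming the limit is exactly 50 characters as stated (the example paths are made up):

```python
# Sanity-check a prospective TheBrain 10 database/restore location
# against the ~50-character path limit mentioned above. The exact
# rule TheBrain applies may differ; this only checks raw length.

def path_ok(path: str, limit: int = 50) -> bool:
    """Return True if the path length is within the limit."""
    return len(path) <= limit

# Hypothetical examples:
short_path = r"C:\Brains\Work"                                    # fine
deep_path = r"C:\Users\SomeVeryLongUserName\Documents\TheBrainData"  # too long
```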
Jon
Dr Andus
11/7/2018 12:37 pm
Jon Polish wrote:
I think TheBrain folks want you to keep your database synced with their
cloud. I don't want to do that.
There is another option that would avoid data corruption in 10. Create a
Brain archive file and use that as a backup.
I'm a cloud convert (as a Chromebook user), so I don't have a problem with having my database synced.
But I guess I was thinking not only about backing up the TheBrain database to be restored with TheBrain, but also about having an archival copy of the data that I could consult even if TheBrain disappeared from the face of the earth (or I stopped my subscription).
So maybe I'm talking about a scheduled, automatic export in a usable format? (Kind of like WorkFlowy, which automatically saves daily copies of its database in its own format, and as a text export, to Dropbox.)
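Nothing in the thread suggests such a scheduled export is built in, but the archive files TheBrain produces can be copied on a schedule by a small script. A sketch, assuming the archives carry a .brz extension; all folder names are placeholders:

```python
# Copy TheBrain archive files (assumed extension: .brz) from a local
# folder into a dated subfolder of a cloud-synced directory, mimicking
# WorkFlowy's daily Dropbox backups. All paths here are placeholders.
import shutil
from datetime import date
from pathlib import Path

def backup_archives(source_dir: str, backup_root: str) -> list[str]:
    """Copy *.brz archives into backup_root/YYYY-MM-DD/ and return the copies."""
    dest = Path(backup_root) / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for archive in sorted(Path(source_dir).glob("*.brz")):
        target = dest / archive.name
        shutil.copy2(archive, target)  # copy2 preserves timestamps
        copied.append(str(target))
    return copied

# Schedule daily via Windows Task Scheduler or cron, e.g.:
# backup_archives(r"C:\BrainArchives", r"C:\Users\me\Dropbox\BrainBackups")
```

The cloud-sync client (Dropbox, Google Drive, etc.) then uploads the dated folders on its own; the script only has to land the copies there.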
Jon Polish
11/7/2018 1:36 pm
Both 8 and 10 support export to folders. I don't have experience with how this works in 10, but 8 has readable data and attachments.
Jon
Alexander Deliyannis
11/7/2018 5:02 pm
Dr Andus, I don't know what your relationship to Jung is, or your opinion on Synchronicity, but I experienced for the first time today an inability to sync my main Brain and I felt a chill on my back...
It turns out that TheBrain's sync server is temporarily down "and should be back up and running again soon". I was probably one of the first to experience the issue being at GMT+2.
It was a good opportunity to get acquainted with TheBrain's backup function and set up an automatically cloud-backed-up folder to accommodate its archives.
Re access to the files if you stop the subscription: my understanding is that they should be fully readable with the free version of TheBrain.
Dr Andus
11/7/2018 10:39 pm
Alexander Deliyannis wrote:
Dr Andus, I don't know what your relationship to Jung is, or your
opinion on Synchronicity, but I experienced for the first time today an
inability to sync my main Brain and I felt a chill on my back...
Just imagine if Jung heard you say: "I experienced for the first time today an inability to sync my main Brain and I felt a chill on my back..."
He would say, "OK, Alexander, why don't you lie down on this couch over here. Let's talk this over."
From what you say though, it sounds more like a case of "won't sync-hronicity". :-)
22111
11/8/2018 12:15 pm
"I’m a cloud convert (as a Chromebook user), so I don’t have a problem with having my database synced."
Would you please note that I don't allege TB-clouded data isn't safe; I'm just speaking in general terms here.
Problem 1: the kind of your data (i.e. your "profession", to put it more bluntly); Problem 2: authority over / possible access to your data (by big U.S. competitors, for example, either "directly" or via secret services); these problems are interwoven.
In the E.U., there is now a prohibition, for many such (commercial/scientific) data, on storing it on servers which aren't located within the E.U. (e.g. servers in the U.S.). Why? Because the NSA and other authorities have (built up?) a reputation for getting access to data in order to transmit it to U.S. authorities and/or U.S. competitors. I don't allege this reputation is justified; I just say that E.U. authorities consider those risks sufficiently important to have made laws to minimize (as they think) those risks.
Thus, E.U. corporations look out for web storage physically located within the E.U., as some of you will know, and will have to do, too, if they are employed in a corporate environment; their mileage in their own pop-and-mom business may vary.
Now the problem with E.U. authorities being that they either aren't smart enough (except for bothering their own people), or that they leave out important considerations on purpose (U.S. directions?), big and highly-connected U.S. corporations implant such servers within the E.U. so that E.U. corporations store their data with them, within the E.U., and hope there will not be any leaks; let's knock on wood and share their hopes.
e.g. https://www.rubikon.news/artikel/das-amazon-kartell
And indeed, such hopes may be fully justified for e.g. sociological findings, whilst that may not be entirely and always the case for findings in fields like chemistry for example.
"I experienced for the first time today an inability to sync my main Brain"
Since TB had changed their db format, I had thought they now use some (non-factory-encrypted) standard db, with all info stored in the tables "readable" (including pics, tables, etc.) by any good front-end for that db system? (That's the case for any (original or meta) info stored in UR, and accessible/processable-by-SQL-with-scripting by any good SQLite front-end.)
This would apply to any stored data, as said, whilst, by additional scripting, you could also replicate any metadata which TB possibly just creates in run-time, be it for ephemeral use only, or your script including storing that meta-data, within the existing tables or in additional tables, to-be-created on purpose.
As has been said above, their patents just (may) concern their graphical representation(s), and filtering and other technical means will spare you from having to cope with "endless lists", the ubiquitous horror of which is expressed in almost any thread in this forum, as well as in the current one.
Thus, IF they don't scramble their data, in order to make it unavailable to you / your corporation / other developers from external means, it all comes down to the question what metadata you will lose on re-import elsewhere, and that will depend entirely upon the means deployed:
Corporations should hold their data within their own hold-it-all application anyway, instead of spreading it over numerous standard applications, and they are able to script such a TB import easily, with correct translation of all record fields into their own logical fields-over-tables distribution used in their (original or adapted) in-house system. For pop-and-mom ventures, I suppose that even lazy developers like UR's will finally write down the necessary import routines (the target db format may be different, as long as the original format is standard and non-scrambled; of course I'm leaving out web storage here, whilst corporations' systems often include some web storage nowadays) IF TB goes down (which is quite unlikely within the foreseeable future). The situation is/remains different, of course, as long as it's just some disenchanted users who simply (and psychologically) cannot bear TB's subscription price anymore:
Getting these fine graphics, together with a web-stored db, comes at a price indeed, and which might of course increase further over the years. But don't say you hadn't been warned, their pricing always having been premium.
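If the new database format really is plain, unencrypted SQLite (an assumption; the post above only speculates that it is some standard db), then listing its tables from any front-end or script is the obvious first probe. A sketch; the file name is hypothetical:

```python
# List the user tables of a (presumed) SQLite database file, as a
# first step toward reading TheBrain's data with external tools.
# Assumes the file is standard, unencrypted SQLite -- if it isn't,
# sqlite3 raises DatabaseError on the first query rather than on connect.
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the names of the user tables in a SQLite database file."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]

# Hypothetical usage: list_tables("Brain.db")
```

If this fails, a hex editor pass over the file header (as suggested above) is the next step to identify the actual format.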
Alexander Deliyannis
11/10/2018 9:27 am
Just a brief note of caution. What I wrote below is incorrect; to manually install a new version, the previous version should be uninstalled first. All settings are retained.
Alexander Deliyannis wrote:
However, I can confirm that the latest 2-3 updates of TheBrain don't
seem to install automatically--update seems to take forever. I close
TheBrain, download the latest version and install it without
uninstalling first. This seems to work fine.
Dr Andus
11/10/2018 12:29 pm
Alexander Deliyannis wrote:
Just a brief note of caution. What I wrote below is incorrect; to
manually install a new version, the previous version should be
uninstalled first. All settings are retained.
I did just install the new version over the old one, and it didn't throw up any issues. Unfortunately there was no improvement in the responsiveness of the software on my laptop.
It did occur to me that I could also try to install it on my Chromebook in CrossOver (Wine) to see if it works any better. It emulates Windows XP though.
But as I work in Chrome most of the time, I am warming to the idea of working mainly with the online version.
What I would miss most though would be the events and the timeline feature for project management.
22111
11/10/2018 1:50 pm
("Your connection is not secure
The owner of www.outlinersoftware.com has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website.
Learn more…
Report errors like this to help Mozilla identify and block malicious sites" - The good news being that this now only appears for the homepage, not for any single page anymore, as it had done for weeks now.)
From that reddit link above:
"thumperj - Guys, I'm going to strongly recommend against TheBrain software.
I used them a while back to take an enormous amount of notes, some incredibly important data. On[e] one of their upgrades, it killed my data. Now, before I get slammed, yes, I screwed up because I didn't have an upgrade. [= backup? But here, it gets interesting:]
However, unfortunately, they were very unforthcoming on exactly what needed to be backed up. I was running it off a USB key, something I was specifically told was supported. I've spent several years on and off trying to recover the data I lost. I've literally begged the company to just tell me what database they use or the password to the data files so I could somehow recover this precious data. They've been barely polite and completely useless.
Choosing to take a chance on TheBrain software is a decision I'll regret for the rest of my life. :( tl;dr; For the love of god, don't use TheBrain software for anything important. Anything would have been a better decision."
Well, well, well, let's put this straight:
Even years ago, there were rumors that they systematically deleted forum posts they weren't happy with, so nothing surprising reported here. (And of course they deleted mine, about import/export and items' note fields; but since mine are outliers to the general feel-good, don't-hurt-feelings-by-addressing-ugly-facts chatter, that doesn't count, of course.)
"begged the company to just tell me what database they use or the password to the data files" - see what I mean, in my post above? Thus, you're well advised to try to open the thing with a frontend (trials of several formats), and if that doesn't work, and/or even first-hand, in order to possibly get the format in question though, try with some really good editor like emeditor, FlexHEX...
But there's another consideration, and the result of your test could even justify continuing to use an application whose developers withhold their paying users' own data from them:
First, try the available export variants. If the result is somewhat acceptable to you, the first condition is met. Then try out whether the application can be installed, and works fully (including the export, and even for big datasets), on a freshly installed Windows (Win7 or an early Win10 build, i.e. a reproducible setup) or on a shelved PC that is NOT web-connected - for, say, 30 days, in trial mode, since it cannot phone home (anymore, whether by your means or theirs). UR's dedicated trial cannot do this, for example, but there would be no such problem with their paid version, i.e. it does NOT need to phone home in order to become, or to remain, fully functional. If so, the second condition is met.
But then there's a catch, of course: your "production" installation will get updates, which you will probably be quite happy about, but possibly without thinking about the risk that, because the developers slightly changed some detail, neither your current work nor your older work, held within the updated installation, will be processable (well, or readable at all) by your shelved installation anymore... and of course the (possibly even partly behind-the-scenes) updated version will not install on the shelved, not-connected PC anymore (minor Windows updates missing), or will not be fully functional over there anymore.
So you will be stuck in any scenario where the new version, which you will have used in the meantime, even if just for the very last days, cannot or will not phone home anymore - for whatever reason - before doing what you need (here: exporting your stuff in order to get it out).
In this context, it's also worth remembering that monstrous databases usually, and for obvious reasons, get backed up less often than file-system-based data repositories, especially since incremental backups of obscure database formats don't work as expected most of the time.
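The incremental-backup point can be made concrete: with per-file storage, a file-level incremental backup only re-copies what actually changed; a monolithic database file is dirtied wholesale by any edit, so the whole thing gets re-copied every time. A small Python sketch of the per-file case (file names invented for illustration):

```python
import hashlib
import os
import tempfile

def changed_files(snapshot, root):
    """Return files whose content hash differs from the snapshot,
    updating the snapshot in place -- the core of a file-level
    incremental backup."""
    changed = []
    for name in sorted(os.listdir(root)):
        with open(os.path.join(root, name), "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if snapshot.get(name) != digest:
            changed.append(name)
        snapshot[name] = digest
    return changed

# Per-file store: 100 small notes.
store = tempfile.mkdtemp()
for i in range(100):
    with open(os.path.join(store, f"note{i:03}.txt"), "w") as f:
        f.write(f"note {i}")

snap = {}
changed_files(snap, store)              # initial snapshot: all 100 files
with open(os.path.join(store, "note042.txt"), "w") as f:
    f.write("edited")
print(len(changed_files(snap, store)))  # 1 -> only one file to re-copy

# A monolithic db file would fail this same hash check in its entirety
# after any single edit, forcing a full re-copy on every backup run.
```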
And yes, you could naively try a three-PC system, with each update (known to you) then tried out for weeks on a not-web-connected PC - in production! (oh my!) - before "trusting" your main production system again. Such a system will soon fall apart, and so, in most use cases, will a system which refrains from running the potentially dangerous application on any web-connected machine at all and only tries out new versions for some weeks on a web-connected PC - not least because the license checks (which you frequently encounter in applications that scramble user data without giving the key to the user) will, sooner or later, reliably prevent you from protecting yourself against data loss of this kind.
Thus my conclusion would probably be that data put into such applications, where you can never be sure, cannot be that important to its users to begin with - which would be perfectly in line with the other aspect: that by using web storage, they give potential access to their data to third parties, possibly ones with far bigger financial and organizational means, who can then carry the findings forward faster and/or on a bigger scale than the users themselves.
But then, it's all about big ideas being realized, not about by whom, right?
(Well, this latter consideration wouldn't apply in each and every case, though - or else: Spandau Project instead of Manhattan Project, anyone?)
Dr Andus
11/11/2018 4:41 pm
Dr Andus wrote:
It did occur to me that I could also try to install it on my Chromebook
in CrossOver (Wine) to see if it works any better. It emulates Windows
XP though.
OK, tried this, it didn't work. I let the installation run for several hours, but it got stuck halfway through, so I cancelled it in the end.
But this might have as much to do with CrossOver, with which I haven't had much success on Chrome OS with other Windows software either.
Amontillado
11/11/2018 5:49 pm
22111,
I can agree with your conclusions, at least as they apply to the long-obsolete version of The Brain I last used, but I'm trying to remember how The Brain really fit together.
I lost data. The Brain customer support was very nice, pointed me to a customer's Brain with a half million thoughts, and said there was no history of losing data. I moved on to OneNote, and eventually to DEVONThink.
My now-musty recollection of The Brain is there was no proprietary database used as a sole repository of structure. There was an XML file, much like Scrivener ties everything together with an XML file. Maybe I had to export to XML format. However I arrived at the XML, I had the entire structure of the Brain itself.
The files themselves were just files in the filesystem, much like Scrivener and DEVONThink. OneNote, I believe, is the only one of those three that stores content in a non-standard form.
Perhaps I misremember.
In any case, I'm sure the current version of The Brain is highly evolved from what I last saw.
22111
11/15/2018 7:02 pm
Amontillado,
Thank you for this info, which doesn't accord with what that other user (link above) said - but he could have been mistaken. Thus we probably have:
- metadata (only, but including titles) in XML, which would explain why a 500,000-item "brain" remains manageable; with the (text) content in XML too, that would have been another story. (Btw, Adobe's picture-management software Bridge also uses XML, in a seemingly identical model; with Adobe Lightroom it was a little different: metadata and previews in an SQLite db, pictures in the file system, in hidden folders, named by strings assigned by the db.)
- content/pictures as separate files in the file system (for text: XML again? or rtf? html?).
IF this is true today, I don't see any problem with this model:
- XML metadata readable by the import script of other (possibly db-based) programs > the "tree" and the various link architectures can be completely rebuilt;
- XML/rtf/html content data readable by other programs, ditto for pictures and "external" documents. (In fact, such a system would probably make no difference between internal and external documents, except perhaps (but not necessarily) for document titling: "internal" content = a "thought"'s note field = an external file in a hidden folder, titled with a string assigned by the XML (similar to Lightroom above), while "real" external files sit in regular folders under their original names.)
NO scrambling (encryption) of your data while it's on your own system = XML/rtf/html readable by ANY editor; so as long as all the (un-encrypted) data is always on your PC (too), no problem.
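If the metadata really does live in plain XML - a big IF - then rebuilding the tree and link architecture in another program is a small parsing job. A sketch with invented element and attribute names (not an actual TheBrain schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata layout -- the tag and attribute names below are
# made up for illustration, not taken from any real TheBrain export.
SAMPLE = """\
<brain>
  <thought id="1" name="Projects"/>
  <thought id="2" name="TheBrain review"/>
  <thought id="3" name="Backups"/>
  <link from="1" to="2"/>
  <link from="1" to="3"/>
</brain>"""

def rebuild(xml_text):
    """Rebuild id->name and parent->children maps from XML metadata."""
    root = ET.fromstring(xml_text)
    names = {t.get("id"): t.get("name") for t in root.iter("thought")}
    children = {}
    for link in root.iter("link"):
        children.setdefault(link.get("from"), []).append(link.get("to"))
    return names, children

names, children = rebuild(SAMPLE)
print(children["1"])  # ['2', '3']
```

Any db-based importer could walk these two maps to recreate the whole structure - which is exactly why an open, file-plus-XML layout is no lock-in risk.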
IF that is the current state of affairs with TB.
As for data leaks, there is traditional industrial espionage too, of course; just today:
https://www.focus.de/regional/koeln/koeln-china-spitzel-enttarnt-e-mails-an-herrn-u-irrer-spionage-krimi-mitten-koeln_id_9917256.html
where two Germans of Chinese origin spied for China inside the Lanxess corporation, which is the new name of the traditional (and universally known) Bayer chemicals and polymers plants.
Considering that "fully functional" mobile devices have slimmed down to around 1 kg (Windows 10 i7, likewise Apple), and that mobile storage now holds 2 TB (and more) without problems, I don't really see the alleged necessity of web space rented from TB or other third parties, with the risk of data leaks from there, for most use cases. I do see the utility, for a traveling salesman in the industrial sector, of access to the "plant" db (the plants probably being in China nowadays, while the mainframe is somewhere in the U.S.), but for most use cases almost everything could be duplicated on mobile, fully functional devices, with a sync to the office PC once (or maybe even several times) a day.
Then there is collaboration, no longer bound to the same premises: I see those use cases, but I doubt that people, once they have gone back from iPads and the like to (now really) mobile "full" PCs/Macs, will really need or profit from web storage, except for select, very minor datasets. Towne didn't need any "collaborative screenwriting software" for "Chinatown", whilst for today's incredibly unbearable German mass-market TV, the writers are probably asked to use such software: they each get a pittance; Towne got paid decently.
And Bayer-Lanxess probably not only made at least two very gross mistakes in the HR department, but also in their information management, and probably continues to do so.