Advice on research software

Started by Wayne K on 8/16/2013
Wayne K 8/16/2013 9:46 pm
I appreciate the many excellent posts on this forum. I'm hoping to get some advice on how to handle an upcoming research project on the Battle of the Little Big Horn.

There is a huge amount of source material available on this topic. I'd like to create a database that will allow me to filter and sort the material in different ways. Text will be broken up into short sections so they can be easily re-arranged. Each text segment will have the usual basic fields (tags) such as author, title, date, etc. Each text segment could be assigned to dozens of topic fields.

I have several PIMs (Ecco Pro, RightNote, TreeProjects, CintaNotes, etc.). Maybe one would be appropriate for this job, but I worry that they're going to be overwhelmed by the volume of material (tens of thousands of records). What PIM would be up to a task like this, or should I be looking at a database like Access?

I realize that what I'm asking has been covered in various ways on many different threads. I'm not asking for a rundown of all the software options. I'm stuck on the issue of PIM vs database and am hoping to avoid going down the wrong path and wasting time. Any thoughts would be appreciated.

Wayne
Stephen Zeoli 8/17/2013 12:20 am
Have you read Dr Andus's excellent blog about how he uses ConnectedText for this kind of work? If you have not, you can find it here:

http://drandus.wordpress.com

Steve Z.
Wayne K 8/17/2013 12:29 am
Yes, Steve, I've read his blog. I may very well go with ConnectedText but I was hoping to get some feedback on whether anyone has tried to use a regular database program for the kind of work I'm describing. I'm thinking it wouldn't be the right tool based on the database examples I've looked at. They seem to lend themselves to organizing information that is easily formatted into lists. I just wondered if they can also be used as I've described to organize research notes.


WSP 8/17/2013 7:13 am
I've tried various kinds of note-taking software through the years for writing books, but at the moment Evernote is my preferred solution. Just last week, for example, I was studying a large body of archival material at a library a considerable distance from home. Fortunately the library allowed me to use my iPhone camera, and I took as many pictures as possible, combining the images into PDF files with an app called CamScanner.

Now that I'm back home, I pop those PDFs into Evernote, which indexes them all very efficiently. In many cases I also create additional notes on this material in Evernote (i.e., I place the PDF on the left side of my screen and Evernote on the right side). I OCR the PDFs in PDF-Xchange before I embed them in Evernote, but strictly speaking that's not necessary, because Evernote does its own recognition. (It even recognizes handwriting, though not very reliably.)

In fact, even when I'm sitting in my study or in a library closer to home, I often snap a picture of a paragraph or two in a book or journal, and within a few minutes after I insert that photo in Evernote, the text becomes searchable. I find that this approach saves me a lot of time otherwise devoted to typing.

Organization is not Evernote's strength (though note-taking certainly is), but I've found a solution that works more or less satisfactorily. I create links to individual notes (very easy to do) and insert them into a note called "Outline: Chap. 1," etc. There they can be manipulated into something resembling a traditional outline if that's what you want.

Bill
22111 8/17/2013 12:47 pm
We know there's dedicated software for it, and we know it's expensive. Since you post here, let's assume you don't want such a solution.

Then let's clarify your needs, since there are several solutions but not necessarily for every demand.

If I understand your problem, you'll have records in the form

header paragraph
some paragraph, coded a, c, d
some other paragraph, coded c, d, f
and many other paragraphs, coded d, f, m, p... in any combination

another record with its header
some paragraph, coded n
some other paragraph, coded a, n, t
and so on

Let's assume you'll have 5,000 such records.

Now, perhaps you need "reports" of form A:
"All records containing paragraphs that are coded a, or coded n, AND coded f, but not coded t"
So you need Boolean search in order to identify records containing paragraphs coded in a specific way.
Result would be a records list here.

But perhaps you also need "reports" of form B:
"All paragraphs coded a, or coded n, AND coded f, but not coded t; together with an indication of their source"
So here you need a gathering function putting all your paragraphs (not records) in a list, meaning the respective paragraphs only (but not the other paragraphs in those records), but together with the respective "header" of the record containing each paragraph.

Do you need "report" A? And/or B? Others?
What forms of "reports" would be mandatory, which forms would just be "good to have"?
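To make the two report forms concrete, here is a minimal Python sketch. The record headers and the codes are invented sample data, and the Boolean condition is read as "(a or n) and f and not t":

```python
# Hypothetical data model: each record is a header plus coded paragraphs.
records = [
    {"header": "Witness account A",
     "paragraphs": [("header paragraph", {"a", "c", "d"}),
                    ("some other paragraph", {"c", "d", "f"}),
                    ("many other paragraphs", {"d", "f", "m", "p"})]},
    {"header": "Witness account B",
     "paragraphs": [("some paragraph", {"n"}),
                    ("some other paragraph", {"a", "n", "t"}),
                    ("closing paragraph", {"n", "f"})]},
]

def matches(codes):
    # "coded a, or coded n, AND coded f, but not coded t"
    return bool({"a", "n"} & codes) and "f" in codes and "t" not in codes

def report_a(records):
    # Form A: the records that contain at least one matching paragraph.
    return [r["header"] for r in records
            if any(matches(c) for _, c in r["paragraphs"])]

def report_b(records):
    # Form B: the matching paragraphs themselves, each with its source header.
    return [(r["header"], text) for r in records
            for text, c in r["paragraphs"] if matches(c)]
```

Both reports fall out of the same coding; the only difference is whether you return whole records or individual paragraphs with their source attached.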

Wayne K 8/17/2013 2:10 pm
Bill, it's interesting how you've been able to overcome one of Evernote's weak points. For this particular project, though, I think I'm going to need something different. Organizing will be paramount.

22111, what software are you referring to when you say there's dedicated software and it's expensive? I'm willing to get something expensive if needed (as long as we're talking $500 expensive, not $50,000). For your examples, I would need both reports A and B. Some of what I need will emerge only after I get into the process of entering the notes. So I would need the ability to add fields after records are completed.

It's been a long time since I've worked with this material, so I'm not able to give you a laundry list of exactly what kind of output I want. I worked on this 25 years ago using a Kaypro II and a word processor. I did all the sorting manually. Those notes are still useful. I would like to upgrade what I did before and have all the material in a single program that I can use for research. When I'm done, I should be able to filter the material by any of the fields and create reports that summarize all the testimony on any topic.

An example of a report would be "Indians' first sighting of Custer". The report would be similar to a spreadsheet with each row having the individual's name, tribe, date the account was given, and a summary of what was said. This report could then be sorted by any of the headings (name, date, tribe, etc)

I've been assuming that I would try one of my PIMs for this project along with looking at ConnectedText and maybe other academic research software. Then I saw the thread here titled "List of ALL info-managers with custom attributes!". This thread included a list of free-form databases. I looked at some of these but didn't do any trials. It set me to thinking that maybe I should be looking at a database solution. That's what prompted this thread.

I'm probably not making myself clear. All I'm really asking is whether anyone has experience using a free-form database like Access to organize research notes. If so, how did it work out?
MadaboutDana 8/17/2013 4:25 pm
Hi Wayne,

I've had experience designing databases with both relational systems (like Access, which is certainly NOT free-form) and genuinely free-form databases (like Blackwell Idealist and, to a lesser extent, FileMaker, which comes into a category of its own for reasons I will explain).

You certainly could organise your research notes using a relational database like Access - as long as you know what you're trying to achieve. At first glance, the working methodology and draft structure you describe are compatible with a modern relational database, provided you know precisely what kinds of reports you're going to want to generate.

However, be aware that configuring the database (or rather, multiple databases/data tables) to your requirements will be a fairly lengthy and potentially complex process. Furthermore, relational databases aren't really optimal for searching through large quantities of text (although they're great for assembling specific chunks of data according to specific output schemes). It all depends - as I've already said - on precisely what you want to achieve with the reports.

It sounds as if something like Blackwell Idealist would be better suited to your needs. But you would also do well to investigate systems like EndNote which, although it calls itself a citation manager, is actually rather more than that (this is true of most citation management software, in fact - they have become the equivalent of bibliographical databases, one of the greatest and best of which used to be Blackwell Idealist, now, alas, defunct).

If you like to devise and control your own output schemes, however, you could take a closer look at FileMaker (which has the additional advantage of being cross-platform). Although FileMaker is a relational database, in practice it has a number of features associated with other types of database. It can handle very large amounts of text, it is capable of running searches across all fields in a record simultaneously - but also of narrowing down/expanding a given set of search results - and it has a number of idiosyncratic features like the ability to manage multiple values in a single field. I've used all these features in various databases I've built in the past, including an admin database, a linked document management system, and various terminology databases (again, linked to the admin database).

If you want to customise your own DBMS, FileMaker Pro might well be the best way to go. You can download a trial from filemaker.com

Having said that, I would indeed recommend that you read Dr. Andus's excellent ConnectedText tutorial, just in case you decide that the precision planning and extensive development time associated with a full-scale DBMS like FM Pro is not really your cup of tea...

Cheers,
The Other Bill
ML 8/17/2013 4:38 pm
I may be able to help. I founded a local history society for a small town; it exists online only. Soon after I launched the website, with a view to it becoming an encyclopaedia, I was invited by a national publisher of local history books to write a book about the town. As you can imagine, there is a huge amount of information, and while I had much content, there were and still are numerous gaps in my own knowledge, let alone my research.

The style of the book is 90 old photos (sepia) paired with 90 photos (in colour) of the same scene today or thereabouts, with captions for each photo. The word 'caption' is a tad misleading: it is actually a short paragraph, about six lines of text. I wanted each caption to be informative and waffle-free, unlike some books in the same series where it is apparent the authors couldn't think what to say!

To collate the information for the captions, I had in mind to use Scrivener, but after a while found it was a non-starter. The problem is keywording and searching. For example, the word 'church' occurred dozens of times, so a search would return dozens of different entries. Using the outliner feature didn't help: I could enter a parent topic, Church A, but would then have to drag and drop each child sub-section to A manually. Sorting is possible but again not ideal. As for random entries, I'd need to know first which parent topic would apply.

The last person to do a book on similar lines took a year to finish it; I had three months. Having whittled the choice of old photos down from the 300 or so I had gathered, it was then a question of what order to put them in. I decided upon compass directions, starting on the outskirts and working in towards the town centre, taking the roads and streets in order in each direction, so that the reader would have a pictorial experience with captions that followed on naturally.

It would have helped had I read the publisher's required layout for the book (the book plan) before I finished, so I didn't have to start all over with just 24 hours to spare. However, thanks to the workflow I'd created, the necessary changes were easy.

For the photos, I used Lightroom 4. I created three smart folders with search criteria on keywords. Folder 1 contained the photos for the front and back covers and the introduction, Folder 2 the old photos, and Folder 3 the new photos. As well as the keywords for each photo, I entered each photo's number (my choice of number, not software-generated) in a spare field, zero-padded for sorting, e.g. 001, 090, etc. Having selected all the photos, I made a copy of each and renamed them, with suffix O for old and N for new; for example, High Street-4-07N, High Street-4-07O. (It was only after checking the book plan to make sure I'd compiled everything as required that I discovered the publishers wanted me to use suffixes A and B!)

For the captions, I considered using Lightroom, but it is not a word processor, and I would have needed to know what I wanted to include as I went. What I needed was a sophisticated note-taker, so I used the FileMaker Pro database. I designed a layout with fields for all the info that I needed, with sorting as well. I tried to have only one record for each topic but quickly gave up, because that meant searching each time I wanted to add new text. Having completed my research and added text to a caption field, and with each record already containing the Lightroom image number, I sorted the database into whatever field order I wanted for a particular photo, then cut and pasted text from the notes fields of records whose information was otherwise identical, so that I ended up with one record for each pair of photos: that record containing all the info I needed to write a caption for that pair.

Having completed the captions in any order, depending upon my mood and the availability of info, I sorted the records by the sequential image number. The publishers wanted the captions in a Word .doc file, so I copied the photo refs and captions into a word processor, saved the file as .doc, and that was that.

Wayne K 8/17/2013 5:05 pm
Thanks, "Other" Bill. That's what I was looking for. I think I'll do a trial of Filemaker and set up a small sample database that I can experiment with.

Wayne K 8/17/2013 5:09 pm
ML, I appreciate the detailed explanation of how you did your photo research. I'll have to think about how I could apply some of your approach to my own text research. It sounds like you found Filemaker useful, which seconds Bill's recommendation.
MadaboutDana 8/17/2013 5:27 pm
Splendid. Thanks for the contribution, ML - interesting. I've not used graphics in FM Pro, but I know it has copious support for them (I was recently reading an interesting article about how one of the major studios uses FM Pro to manage the foreign-language versions of their films - most unexpected!).

FM Pro is an interesting application, with a vast range of potential uses. If you get the Advanced version, you can create your own apps - including apps for iOS which don't need to be approved by Apple (all users need is the - free - FM Pro 12 app from the App Store).

You can even create outliners with FM Pro (I know, sad geek that I am, I've actually sat down and done so, in a much earlier version - 7.5 from memory).
22111 8/17/2013 5:40 pm
I don't see how such a task could be done with the Adobe product. I also jumped 30 cm when reading "Access, a free-form database". Dr Andus could easily give you two, three or four names of dedicated software (CT not being one of them as far as I know, though it might "do" it; Dr Andus knows it thoroughly, so only he could tell), but those are in the range of 1,000 to 3,000 dollars/euros, except for students (who have the problem that in most cases their cheap versions expire rather soon; it's not as with MS and such).

Technically, a relational database can do it, but it was on purpose that I put those examples here: the problem with relational databases is that for such a task they don't work well unless you atomize your texts into paragraphs, which means you lose the context. With "1 paragraph = 1 record", it's quite another thing from what most people really need there, which is "1 paper = 1 record, but then be able to freely gather paragraphs from everywhere, without losing their respective source references".

I don't know the defunct free-form database mentioned above, but I know askSam, and I'm 100% POSITIVE that it can do it, via so-called "reports", but only if you invest some time into re-arranging your coding (not: coded) data; with the "global replace" function, before entering the data into AS, or even within AS, this should be possible.

Since I claim it's possible, I also need to say that yes, you would have to "code" every paragraph as a (multiply-occurring) "field", and the "header info" as another (unique) field; then, with proper coding, gathering all relevant paragraphs from anywhere, together with their respective "headers", is possible.

Records would look like this:

#HeaderFieldIdentifier[respective data of the header; could be divided up into several fields, e.g. all in one line, or in several of them]
Attention: in order to be able to change/add fields in AS afterwards, have (even empty dummy) fields in such lines; it's (short of external scripting) the only way to add fields between other fields. Example: field1[askjfsdfas] dummyfield2[(left empty)] field3[asdfklhaskflhsafkhsd]
Now you can insert a field "2a" by replacing
] field3[
by
] newfield2a[] field3[

This is extremely primitive, but at least it works, and whenever AS is the only program that's able to execute your task, such headaches suddenly become acceptable.

Further down, the "real content" of each record:

t[text text text, even for several lines
]

t[again text, text]

t[again text, text...
text...]

This is ugly, but it is the only way of doing it as far as I know, and this way, AS is able to do it, by your taking advantage of AS' ability to process identically-named "fields", in its searches, and in its "reports".

AS was moribund for years but is currently in development again. Depending on the size of your material, you might need the "prof." version, e.g. for 5,000 records with between 3 and 100 paragraphs each; but perhaps you will have just 500 such records, and even for 1,000 records the "standard" version will amply suffice (the only difference being the missing search index; any results are identical, they just take a little more time). Also, AS is regularly on bitsdujour, so buying the "standard" version at full price, then buying the "prof." version on bits some other day, should be a viable policy.

This is a cumbersome but working solution for this task.

The only alternative I know of is scripting, meaning you put your data into separate files, or all your data in one text file, and then you put together macros that work on this stuff.

Of course, this requires some scripting ability, and worse, you will have very long lines, not paragraphs, and no way to have formatting like bolding and such.

That's why you should spend some hours with trialling AS, with your imported data.

As for coding the data there (each paragraph being a "t[" "field"), this could be done in the form "#28", "#ac", etc., as part of the first line of those paragraphs / multi-line "fields", and then searching for those "near" each other whenever you need them in combination.

Many people continue to work with AS every day, in spite of numerous problems with that software (do lots of backups; don't search for a forum anymore; perfect search is by command line only, but the respective commands can always be found on the web). The above use is one of those where AS excels or is even unique.
22111 8/17/2013 5:54 pm
I forgot: in order to prepare your data for import into AS, you would need an editor or text processor in which you would be able to replace

blankline

by

"]", then a return, then the blank line, another return, and then "t["

which is not possible within AS. In short, you need MS Word or another of those innumerable text processors that allow for command characters within the replace function.
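This replace operation can also be scripted. A small Python sketch, assuming the t[...] layout from the record examples above (askSam's actual import requirements may differ in detail):

```python
import re

def to_asksam_fields(text):
    # Split the record into paragraphs at blank lines, then wrap each
    # paragraph in an identically named t[...] field so that AS can
    # search and gather them later.
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    return "\n\n".join("t[" + p + "]" for p in paragraphs)

record = "text text text\neven a second line\n\nagain text, text"
print(to_asksam_fields(record))
# t[text text text
# even a second line]
#
# t[again text, text]
```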
22111 8/17/2013 6:08 pm
Also, in the above example, the empty dummy field "2" is not necessary, since you could do the replace with the closing "]" of field "1" instead; but keep an eye on having such dummy fields as the first and last field within a line, whenever it might be necessary to add another field there later on.

It goes without saying that this inflexibility with fields is one of AS' biggest shortcomings (any "re-arranging" of fields is strictly impossible), but as said, it preserves your paragraphs' context, which relational databases do not, so all this is annoying but becomes "acceptable" in the end.
MadaboutDana 8/17/2013 6:12 pm
On the contrary, it's perfectly possible to assign contexts to individual paragraphs in a modern relational database. That's because it's perfectly possible to auto-assign tags, sequential numbers, codes extracted/summed from multiple fields, etc. There's quite a lot of work involved, as there is in any kind of structural definition (your askSam example makes the same point, as it happens; I've worked with askSam, which is not dissimilar to Idealist, but without the clever scripting language Idealist used to have). Once you've set up your structures, you can be sure of systematically and consistently obtaining the same kind of output repeatedly, which is where relational databases excel. Again, I emphasize that obtaining the best results from an RDBMS is all about knowing what you're trying to achieve.
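As an illustration of the point about preserving context, here is a minimal relational sketch using SQLite (table names, column names and sample data are all invented; a real Access or FileMaker solution would differ). Paragraphs live in their own table but keep a foreign key back to their source record, so a gathering query never loses the header:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE record    (id INTEGER PRIMARY KEY, header TEXT);
CREATE TABLE paragraph (id INTEGER PRIMARY KEY,
                        record_id INTEGER REFERENCES record(id),
                        seq INTEGER, body TEXT);
CREATE TABLE code      (paragraph_id INTEGER REFERENCES paragraph(id), tag TEXT);
""")
con.execute("INSERT INTO record VALUES (1, 'Account X, 1908')")
con.executemany("INSERT INTO paragraph VALUES (?, ?, ?, ?)",
                [(1, 1, 1, 'first paragraph'), (2, 1, 2, 'second paragraph')])
con.executemany("INSERT INTO code VALUES (?, ?)", [(1, 'a'), (2, 'f'), (2, 'n')])

# Gather every paragraph coded 'f', each joined back to its source header.
rows = con.execute("""
    SELECT r.header, p.body
    FROM paragraph p JOIN record r ON r.id = p.record_id
    WHERE p.id IN (SELECT paragraph_id FROM code WHERE tag = 'f')
    ORDER BY r.id, p.seq
""").fetchall()
print(rows)  # [('Account X, 1908', 'second paragraph')]
```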
MadaboutDana 8/17/2013 6:14 pm
Also worth mentioning that FM Pro supports rich-text fields... although it doesn't import e.g. RTF with all formatting, unfortunately.
Wayne K 8/17/2013 7:35 pm
Yes, I'd already been thinking about the problem of maintaining context if the material is broken up into snippets. You could partly get around this by assigning text needed for context to the same field as the "primary" text.

Re Access as a free-form database: I originally wrote "free-form databases and conventional databases like Access." I changed it because the previous thread had it listed under free-form. I should have gone with my first thought.
Stephen Zeoli 8/17/2013 8:59 pm
Another option to consider is Zoot. It combines the "spreadsheet" grid view you describe, with a full editor window. Its smart folders can be used to gather information as you need it. Worth a look.

Steve Z.
Dr Andus 8/17/2013 9:04 pm
Wayne K wrote:
Yes, Steve, I've read his blog. I may very well go with ConnectedText
but I was hoping to get some feedback on whether anyone has tried to use
a regular database program for the kind of work I'm describing.

Here is an example of someone who chose MS Access over CT. You might find this interesting.

http://jostwald.wordpress.com/2012/11/15/to-tweak-or-to-chuck-that-is-the-question/

A general question I'd ask when evaluating software for this kind of project is how easy it is to import tens of thousands of notes, and then export them after processing (besides the issue of how to analyse and organise them, which was already touched upon).

It's also an interesting question whether or not CT could be right for this project. While importing tens of thousands of small notes sounds theoretically doable, and CT's properties/attributes/categories can take care of most of the required organising (and then there is still the additional wiki feature of linking, as a way of analysing), and there are good search and reporting functions, adding properties/attributes/categories manually to tens of thousands of notes sounds like a gargantuan task.

Having said that, if you want to explore the viability of doing all this in CT, I'd suggest signing up and asking the folks on the CT forum, as there very well may be imaginative shortcuts to accomplish the above.

Another thing to keep in mind is that CT v. 6 is on the way (the beta is already making the rounds). One new killer feature is going to be the so-called "named block", which will allow one to mark up sections of text (i.e. what is called "coding" in qualitative data analysis) across the notes database, and then gather those segments in a separate document. This is exactly the same feature that those "expensive" QDA software have (such as NVivo, Atlas.ti, QDA Miner etc.).
22111 8/17/2013 10:22 pm
CT appears very interesting, though I'd point again to the (possibly needed) functionality of not only gathering paragraphs but gathering them together with their respective "source info" (be that in a special first paragraph of the record or elsewhere).

As for relational databases, I perfectly understand that you could assign additional attributes and such to your records, in the form "records 1024 to 1038 all share a given attribute, making them a group, and their order within that group is specified by their record numbers", and I also understand there could be a view in which these records 1024 to 1038 are all listed together, in a single pane.

But it appears more "natural" to me to have some "paper" as a unit/record and break up its elements when needed, than to break up such "papers" to begin with and then recombine their elements into "group views". As you say, though, technically this is perfectly possible, as is combining a "global" "source" record (here, number 1023) with such groups, and with any single "content" records in numerous combinations.

But I also assume that in such an environment, any coding or other editing / "thinking" about those mini-records will be rather cumbersome: you will see a combination of records 1023 to 1038 in one pane, but I fear that to edit one of them, you'll have to do it in an extra pane showing only record 1029.

Perhaps I didn't grasp better possibilities in what you say, perhaps you could give specifics how to work with such bits in real life there?

But I'm not into pushing AS "at any price". As for its defunct competitor mentioned above, which seems to be "better" or more apt for this task: is there any chance of obtaining it somewhere, even with discontinued development?

From my experience, trying to obtain defunct software from eBay (even worldwide) takes years and is not successful anyway, except in very rare exceptional cases, when it comes to rare software.

Btw, there is a German bibliographic software, Citavi, that gathers bits of text into new listings, but I fear it will not do it in the way that would be needed here.

MyInfo has a function that allows for referencing single paragraphs, but no gathering of them whatsoever. In any case, it's functionality that would be very helpful for many purposes in traditional 2-pane outliners, and its programming would be rather simple, yet it is not often implemented. I don't know Zoot well, so I cannot say whether Zoot might indeed execute this task.
22111 8/17/2013 10:52 pm
With respect to AS, I forgot to mention that it might even be possible to have numeric values / value ranges in paragraphs to search for, according to which you select a given paragraph or not, but I'm not sure of this. It works for records, of course, but not necessarily for paragraphs: coding in the form #28, @abc WITHIN a paragraph = field is no problem in AS, but putting that coding info into an extra field would then, within the report, select that field, not the corresponding "text" field. It goes without saying that within a relational database this would be possible, though (btw, in MyInfo, with attributes, it is not even possible at the record level, let alone any paragraph level).

In a software like AS, though, you could at least have codes like #ac1, #ac2 and #ac3, then "search for" / select all paragraphs with "#ac2 or #ac3 or #ac4" - this is all far from perfect (meaning it will probably be impossible to get paragraphs "where #ac>1 and #ac
Armin 8/18/2013 2:27 pm
Wayne K wrote:
There is a huge amount of source material available on this topic. I'd
like to create a database that will allow me to filter and sort the
material in different ways. Text will be broken up into short sections
so they can be easily re-arranged. Each text segment will have the
usual basic fields (tags) such as author, title, date, etc. Each text
segment could be assigned to dozens of topic fields.

Sounds like the work I do. I collect huge amounts of text data from the web, books, documents, etc.
For years now I have mainly used Zoot for this kind of text analysis, filtering and sorting. Zoot includes everything I need (tags, built-in fields, user-created fields, folders for smart filtering, etc.). I have not found any better text-database tool so far. By the way, Zoot can handle not only text but also images.

22111 wrote:
Btw, there is a German bibliographic software, Citavi,

I use Citavi, too. Citavi was developed as a reference manager for organising academic research literature and keeping track of your quotations. But in the meantime, Citavi has also become an idea manager and an outliner for organising your own ideas as well as your quotations. Because of its structure as a reference manager, I use Citavi only for quotations and small info-snippets, not for huge text storage and organisation. For text filtering and sorting there is Zoot.
Just my experience.
Best regards
Armin

Armin 8/18/2013 3:40 pm
22111 wrote:
We know there's dedicated software for it, and we know it's expensive.
Since you post here, let's assume you don't want such a solution.

22111 wrote:
could easily give you 2, 3 or 4 names of dedicated software (CT not
being one of them as far as I know, but might "do" it, Dr Andus knows it
thoroughly, so he only could tell), but those are in the range of 1,000
to 3,000 dollars / euro, except for students (which have the problem
that in most cases, their cheap versions do expire rather soon, it's not
as with MS and such).

You mean software for qualitative data analysis (QDA software)? Examples of this kind of software are MAXQDA and ATLAS.ti.
Indeed, they are really, really expensive. I had the chance to test MAXQDA at university for some time; it is made for qualitative academic research: analysing interviews (transcriptions) or documents (now even videos or images). For text analysis, the focus is on coding words or paragraphs, as 22111 illustrated above. QDA software lets you organise and categorise your data according to your own code preferences.

Although Zoot is by no means QDA software, you can do quite a lot of qualitative text analysis with it, too. Of course, Zoot lacks special features like visualisation, presentation or quantitative analysis of your coding results. However, the price of Zoot, which has a lot more features overall, is very low compared with the prices of MAXQDA or ATLAS.ti.

Best regards
Armin

Dr Andus 8/18/2013 8:53 pm
22111 wrote:
CT appears very interesting, though I'd point again to the (possibly
needed) functionality of not only gathering paragraphs but gathering
them together with their respective "source info" (be that in a special
first paragraph of the record or elsewhere).

I'm not sure what the final implementation of this feature is going to be, but at least in CT v.6 beta you can do a query for a given "code" (text with which passages have been marked up), which then gathers all the marked-up passages (e.g. in a new document).

Above each gathered passage there would be a hyperlink saying [Edit], and if you click on that, you are taken back to the source document of that particular passage. If you hover over the link with the mouse, the status bar tells you the name of the source document before you click.
Bernhard 8/19/2013 9:06 am
Dr Andus wrote:
22111 wrote:
< ... >

I'm not sure what the final implementation of this feature is going to
be, but at least in CT v.6 beta < ... >

It may be OT, but is there any news about CT v6? Development started last year in July, and there is only a beta forum with no information about a planned publication date. I don't want to buy a (possibly soon outdated) upgrade to v5.