Try it now. There was an oddity in the auth file for the path (I had forgotten the leading slash). I was able to commit even before changing it, but then again, I have full /lucene permissions.
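For reference, an authz section for that path would normally look something like the sketch below; the group and user names are illustrative, not the actual ASF configuration. The important bit is the leading slash on the section header:

  [groups]
  orp-committers = gsingers, simonw

  [/lucene/openrelevance]
  @orp-committers = rw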

-Grant
On Nov 18, 2009, at 4:51 PM, Simon Willnauer wrote:

No luck here! I guess it is not something on my side. I tried on 3 machines
(Linux and Windows) and I always get the same error:
svn: Commit failed (details follow):
svn: Server sent unexpected return value (403 Forbidden) in response
to CHECKOUT request for
'/repos/asf/!svn/ver/783110/lucene/openrelevance/trunk'

No idea why it tells me something about CHECKOUT when I try to commit though.
Can you look at the authz files for SVN? It would be good if we could solve
this issue somehow :)

simon
On Wed, Nov 18, 2009 at 10:40 PM, Grant Ingersoll wrote:
Simon,

Any luck on this?

Do you want me to try the patch?

-Grant
On Nov 14, 2009, at 7:06 PM, Simon Willnauer (JIRA) wrote:


[ https://issues.apache.org/jira/browse/ORP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778023#action_12778023 ]

Simon Willnauer commented on ORP-1:
-----------------------------------

Grant, I tried it on US and EU. I always get the same stupid error.
I googled a bit, and one possible cause I found is that the path in the authz file is slightly wrong (upper/lower case issues). Are you able to check this?

simon
Use existing collections for relevance testing
----------------------------------------------

Key: ORP-1
URL: https://issues.apache.org/jira/browse/ORP-1
Project: Open Relevance Project
Issue Type: New Feature
Components: Collections, Judgments, Queries
Reporter: Robert Muir
Assignee: Simon Willnauer
Attachments: ORP-1.patch


I created a list of existing collections with queries and judgements on the wiki here: http://cwiki.apache.org/ORP/existingcollections.html
These can be downloaded from the internet. (Please add more if you know of any.)
I've created source code (ant and java) to download these collections and reformat them to the TREC format that the lucene benchmark expects.
Each collection has its own ant script to download it and java code to reformat it, although I have some shared code at the top level.
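A rough sketch of what one of those per-collection ant scripts could look like (the URL, target names, and converter class below are placeholders, not the actual contents of the patch):

  <project name="tempo" default="corpus">
    <property name="work.dir" location="work"/>

    <target name="download">
      <mkdir dir="${work.dir}"/>
      <!-- placeholder URL; the real download location is listed on the wiki page -->
      <get src="http://example.com/tempo.zip" dest="${work.dir}/tempo.zip"/>
      <unzip src="${work.dir}/tempo.zip" dest="${work.dir}/raw"/>
    </target>

    <target name="corpus" depends="download">
      <!-- hypothetical converter class; the shared reformatting code lives at the top level -->
      <java classname="org.apache.orp.tempo.TempoToTrec" fork="true" failonerror="true">
        <arg value="${work.dir}/raw"/>
        <arg value="${work.dir}/corpus.gz"/>
      </java>
    </target>
  </project>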
The resulting output for each collection is a "corpus.gz" file, a queries file, and a judgements file.
The corpus.gz is a gzipped file that can be indexed with ant via the lucene benchmark package (using TrecContentSource).
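As a rough sketch, an indexing algorithm file for the benchmark package could look like the following; the exact property names and repeat syntax vary between benchmark versions, so check the conf/*.alg examples that ship with it:

  content.source=org.apache.lucene.benchmark.byTask.feeds.TrecContentSource
  docs.dir=work/corpus
  directory=FSDirectory
  work.dir=work/index

  # index every document the content source produces
  CreateIndex
  { AddDoc } : *
  CloseIndex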
Once the index is created, the command-line tool QueryDriver under the lucene benchmark quality/trec package can be used to run the evaluation.
It will print some summary output to stdout, but will also create a submission file that can be fed to trec_eval.
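Such a run could look roughly like this (jar names, file names, and the index path are placeholders; the QueryDriver usage message documents the exact arguments):

  java -cp lucene-core.jar:lucene-benchmark.jar \
       org.apache.lucene.benchmark.quality.trec.QueryDriver \
       topics.txt qrels.txt submission.txt /path/to/index

  # then score the submission file
  trec_eval qrels.txt submission.txt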
For starters, I will only have support for one collection in the patch, the Indonesian "Tempo" collection (around 23,000 docs).
We can simply add subdirectories for additional collections (it does a contrib-crawl-like thing).
Once I finish wrapping up some documentation (such as description of the formats, some javadocs, and an example), I'll upload the patch.
These formats are actually documented in the lucene-java benchmark package already, but I think it would be nice to add this for non-java users.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
--------------------------
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids) using Solr/Lucene:
http://www.lucidimagination.com/search
