
Re: (off-topic) self-plagiarism



Hi Joe,
I think you've got a very valid point here. The best way forward would be to
share your and the other reviewers' concerns with the authors through the
editor and ask them to explain and clarify the matter. In any fair judicial
system, people are allowed to defend themselves; I don't see why this
shouldn't be the case in the scientific review process. The editor and the
reviewers can then take corrective action (if necessary) based on the authors'
clarifications.
Regards,
Ramin
Quoting Joe Sollini <joe@xxxxxxxxxxxxx>:

> Unfortunately I couldn't find the other four.  Of the two I have read, he
> does use actual data in one, but as you rightly point out, in the other he
> talks about its application but does not actually model any data.  It sounds
> like you have a very good case to suggest this is repetition
> (self-plagiarism).  I share your disbelief at how this has happened; it's
> possibly due to the shifts in domain/scientific fields that this model
> traverses, although given you found 6 with a Google search of the title,
> this shouldn't be too much of a barrier.
>
> Joe
>
> -----Original Message-----
> From: AUDITORY - Research in Auditory Perception
> [mailto:AUDITORY@xxxxxxxxxxxxxxx] On Behalf Of Laszlo Toth
> Sent: 10 July 2009 13:27
> To: AUDITORY@xxxxxxxxxxxxxxx
> Subject: Re: (off-topic) self-plagiarism
>
> On Fri, 10 Jul 2009, Joe Sollini wrote:
>
> > Sorry to bring this up again, but having had a look through these papers:
> > instead of finding six papers I was only able to find 3 (but six links
> > to papers).
>
> I found seven papers with virtually the same abstract (the one I received
> for review is the 8th). Unfortunately, I have access to the full text in
> only 3 cases (plus the one for review, but I'm keeping that one secret...),
> so I have to judge based mainly on the abstracts.
> (I can send you a list; maybe you can help me get the remaining ones.)
>
> > If he has a model with a wide range of applications and
> > applies this model to fields as disparate as face recognition and virology,
> > it could perhaps be deemed fitting that they need to be published in
> > journals that people practising in the respective fields read?
>
> I definitely agree with that. But this would require the content of the
> paper to be:
> 1. A claim that there is a new theoretical model and that it is
> applicable to the field of the journal (conference).
> 2. A description of the newly proposed model.
> 3. Empirical justification using data taken from the specific field.
>
> This doesn't hold in this case, as I'll explain below.
> Of the 7 paper titles, 3 say "a fast ... model", and 4 go like
> "a fast model applied to the field of ...". So the titles themselves accord
> with your description above. However, let's move on to the abstracts.
> Of the 7 abstracts, 5 start with the sentence: "This paper
> presents a new approach to speed up the operation of <model>".
> So the topic of the papers (according to the abstract) is NOT the
> application of the model to a new domain, but a theoretical result on how
> to compute it faster than before. Although the other two abstracts start
> with "this paper presents an intelligent approach to detect...",
> the remaining text is the same in all seven cases: "it is proved
> theoretically and practically that the number of computations required
> <by the new method> is less than that needed by the <old method>". So
> although the paper titles claim that the method will be tested on a new
> domain, there is not a word about that in the abstract!
> The theoretical sections are word-for-word the same in the 3 papers I have
> access to. Notice again that the formulas are about the speed-up factor
> (number of operations required) of the method compared to the old one, so
> these again agree with the abstract, not with the titles.
> Now, the funniest part: the experiments. In two of the papers the numbers
> are given in diagrams, in one in tables, so I cannot really tell whether
> they are different or the same. However, these results are clearly ALL about
> speed-up ratios. So while the titles say that the method will be applied to
> a new domain (virus detection, code detection, record detection, etc.),
> there are NO detection results given at all! Just speed-up results
> (as promised by the abstract). No proof that it works, only proof that it
> can be faster than before. And the most shocking part: none of the papers
> says ANYTHING about the test data! Only that these are Matlab simulations.
> It is not stated at all that the data were domain-specific. I simply
> can't believe that these went through a review process. Ah, and finally, the
> Conclusions: it's again the same in all papers, stating that "computations
> have shown that <new model> requires fewer computation steps than <old
> model>". Which is true, but that has nothing to do with the claim of the
> titles that the model will be applied to a new domain.
> Again, I can't say anything about the remaining 4 papers, but based on
> their abstracts I suspect that they were also "generated" with the "let's
> adjust the title and the first sentence to the domain" method (which,
> as I said, would be acceptable if the experiments were also adjusted...).
>
>                Laszlo Toth
>         Hungarian Academy of Sciences         *
>   Research Group on Artificial Intelligence   *   "Failure only begins
>      e-mail: tothl@xxxxxxxxxxxxxxx            *    when you stop trying"
>      http://www.inf.u-szeged.hu/~tothl        *
>