Full text for this publication is not currently held within this repository.
Abstract: The automated Turing test (ATT) has become almost a standard security technique for addressing the threat of undesirable or malicious bot programs. In this paper, we motivate an interesting adversary model, the cyborg: either a human assisted by bots, or a bot assisted by humans. Since there is always a human behind such bots, or a human can always be made available on demand, an ATT cannot differentiate these cyborgs from humans. The notion of "telling humans and cyborgs apart" is novel, and it can be of practical relevance in network security. Although this is a challenging task, we have had some success in telling cyborgs and humans apart automatically. © 2009 Springer-Verlag Berlin Heidelberg.
Author(s): Yan J
Editor(s): Christianson, B., Crispo, B., Malcolm, J.A., Roe, M.
Publication type: Conference Proceedings (inc. Abstract)
Publication status: Published
Conference Name: 14th International Workshop on Security Protocols
Year of Conference: 2009
Series Title: Lecture Notes in Computer Science