Posts Tagged ‘security’

Jon Kleinberg: "Challenges in Mining Social Network Data: Processes, Privacy and Paradoxes"


ePresenceTV – Challenges in Mining Social Network Data: Processes, Privacy and Paradoxes

slide #12: LiveJournal is much smaller than Facebook, but it's a blogging site – it's completely open. It wants to be crawled, it wants to be indexed. That makes it a model system.

slide #13: Would I join a service when 3 friends already use it – and does it matter whether those 3 are connected to each other or independent? Would I sooner believe info from 3 independent people or from 3 connected ones?

slide #20: Music Lab: 8 parallel instances of a music download site with feedback (rankings, recommendations) -> in each instance, the songs that ended up at the top (most downloaded, best rated) varied.

slide #24: To attack an anonymized network (just nodes & edges), the attacker can add some nodes (create some accounts) and edges (send messages) before the network is anonymized, and afterwards re-identify his own nodes… and, through them, others… aim of the attack: privacy breach

slide #29: the nodes added by the attacker form a random graph H (edge probability 1/2), which is unique in small worlds (easy to locate)

slide #30: after locating H -> locate the target nodes the attacker linked to earlier

slide #31: even unplanned, you + 6 random friends can carry out such an attack (with no need to add spam nodes before anonymization) -> you can compromise ~10 users
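The active attack from slides #24–#31 can be sketched in a few lines. This is my simplified reading, not the paper's algorithm: the attacker plants a random subgraph H (edge probability 1/2, so with 7 nodes there are 2^21 possible internal wirings and H is almost surely unique), and gives each target a distinct subset of H's nodes as a fingerprint. All names and the bit-encoding of subsets are hypothetical:

```python
import random

def plant_attack_subgraph(k, targets):
    """Sketch of the planted-subgraph attack: create k new accounts,
    wire them randomly with edge probability 1/2, and link each
    target to a distinct subset of the new nodes."""
    new_nodes = [f"attacker_{i}" for i in range(k)]
    edges = set()
    # internal random graph H: each pair connected with probability 1/2
    for i in range(k):
        for j in range(i + 1, k):
            if random.random() < 0.5:
                edges.add((new_nodes[i], new_nodes[j]))
    # fingerprint: target idx gets the subset encoded by the bits of idx+1
    links = {}
    for idx, t in enumerate(targets):
        links[t] = [new_nodes[b] for b in range(k) if (idx + 1) >> b & 1]
    return new_nodes, edges, links
```

After anonymization, the attacker searches the published graph for a subgraph matching H's unique edge pattern; once H is located, each fingerprint subset points back to one de-anonymized target.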


refreshing forms vs. transactions


continuing the discussion (and I need HTML, so I'm moving it here – curious whether the pingback on jogger will work?):

[…] A unique transaction identifier would suffice – if I refresh the page and order a transaction with exactly the same identifier as one already executed, the bank's system should protest

ha, that's exactly the problem: the identifier is created server-side, only once the data has been submitted… and if the data is submitted again, what then? a new id?

that's why, in the mBank or Allegro timeout situation described earlier, Firefox warns on refresh with something like:

POSTDATA warning

that's why the identifier should be something sent along with the form… if not the data itself, then maybe a timestamp of the moment Submit was pressed?

right, but <FORM> is HTML, which doesn't have to be dynamic… and why not do it dynamically? hmmm, because that already means at least JavaScript, and the security folks are afraid of it… actually XForms would probably be enough, but no browser supports it out of the box yet…
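The idea above – an identifier embedded in the form itself, so a refresh re-sends the *same* id – boils down to a server-side idempotency check. A minimal sketch, assuming the token is a hidden form field; all names here are made up:

```python
import uuid

executed = {}  # form_token -> transaction details (in reality: a DB table)

def render_form():
    """Embed a fresh token in the form as a hidden field; re-rendering
    the form makes a new token, but a plain refresh (re-POST of the
    same POSTDATA) re-sends the old one."""
    return uuid.uuid4().hex

def submit_transaction(form_token, amount):
    """Execute the transfer only once per token; a duplicate
    submission triggered by a refresh is rejected."""
    if form_token in executed:
        raise ValueError("duplicate transaction id – already executed")
    executed[form_token] = amount  # pretend the transfer happened here
    return "ok"
```

So the first POST succeeds and the re-POST with the identical token raises an error – exactly the "the bank's system should protest" behaviour, without any JavaScript.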

bottomline: …I've never done this and I'm only theorizing, but it seems logical to me 🙂

P2P backup


while listening to IT Conversations | Jon Udell's Interviews With Innovators | Phil Libin on EverNote, I got this idea, inspired by recent P2P services like money lending…

I try to replicate my most valuable data across several locations, but there's always the threat that several is not enough. Lately, online servers have become the most common storage for my personal data. And though I believe in e.g. Google's backup policy, I felt I should do more about it. So… how about replicating backups via P2P?

Just like in file swapping (a backup can be a single archive file), duplicate your precious data in as many locations as possible. The only difference is that it's more like push (upload) than the pull (download) of traditional swapping.

– So who'd like to host my data, and what's in it for them?
– Well, JWL sang: "I scratch your back, you scratch mine".

– And what about privacy?
– Isn't PKI enough? And "there's always a bigger fish"… than Blowfish 🙂

Also some specs I can think of now:

  • a backup may be fragmented (if too large) – also, since we need instant backup but rarely a recovery, fragmenting may be the optimal solution – as it is in e.g. BitTorrent
  • if snapshot replication is not enough, incremental approach may also be supported
  • the priority of a backup may be set by its owner and respected by the network (community) in distribution and storage duration
  • there may be a TTL for each archive fragment
  • the number of distributed locations you may use is proportional to the storage space you offer for hosting (it may be part of a disk partition 🙂
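The fragmenting bullet above can be sketched quickly. This is only an illustration of the idea, not a protocol – the chunk size is tiny for demo purposes, and tagging each fragment with a SHA-256 digest (my assumption, not in the original notes) lets you verify integrity when recovering from untrusted peers:

```python
import hashlib

CHUNK_SIZE = 4  # bytes here for the demo; megabytes in practice

def fragment(data, chunk_size=CHUNK_SIZE):
    """Split an archive into fixed-size fragments, each tagged with
    its SHA-256 digest, ready to push to different peers."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def reassemble(fragments):
    """The rare recovery path: verify each fragment's digest,
    then concatenate them back into the archive."""
    out = b""
    for digest, chunk in fragments:
        assert hashlib.sha256(chunk).hexdigest() == digest, "corrupt fragment"
        out += chunk
    return out
```

Backup stays cheap (split and push), while the occasional recovery pays the cost of collecting and verifying every fragment – which matches the "instant backup, rare recovery" trade-off above.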

that’s the idea – maybe I’ll get back to it later…

Aaaaahhh… should've known I was scooped > … I still haven't got that "GGL first" reflex yet 😛

‚secret server page’ @Matt Cutts Discusses Webmaster Tools


cleanin' up my phone I found this video, which I'd saved for later, 'cause:

some time ago a friend of mine was publishing docs at a 'secret URL' – "if no one links to it, it'll remain secret -> you can't google it", he said… I always felt that wasn't right but couldn't justify it properly… here's an explanation of why it's not a good security approach (01:15–02:20 of this video)…