There was an interesting panel opening the Microsoft faculty research summit, featuring Rick Rashid, Daniel Reed, Ed Felten, Howard Schmidt, and Elizabeth Lawley. Lots of interesting ideas, but one that got thrown out was the recent suggestion that maybe the world really does only need five (cloud) computers. If something like this does happen, then perhaps we’ll need to think even more aggressively about information sharing issues. Is there some way for me to make sure that I only share with (say) Google’s cloud the things that are absolutely needed? Once I have given some information to Google, can I still retain some control over it? Who owns this information now? If I do, how do I know that Google will honor whatever commitments it makes about how it will use or further share that information? We’ll be exploring some of these questions in our “Assured Information Sharing” research. Some of the auditing work that MIT’s DIG group has done also ties in.
A UMBC-led team recently won a MURI award from DoD to work on the “Assured Information Sharing Lifecycle”. It is an interesting mix of work on new security models, policy-driven security systems, context awareness, privacy-preserving data mining, and social networking. The award really brings together many different strands of research in eBiquity, as well as some related research in our department. We’re just starting off, and excited about it. UMBC’s web page had a story about this, and more recently, GCN covered it.
The UMBC team is led by Tim Finin and includes several of us. The other participants are UIUC (led by Jiawei Han), Purdue (led by Elisa Bertino), UTSA (led by Ravi Sandhu), UTDallas (led by Bhavani Thuraisingham), and Michigan (led by Lada Adamic).
The well-known labor economist George Borjas visited UMBC last week to give a lecture in our humanities series. Borjas is very well known in political circles for his economic analysis of immigration. More importantly, not only does he write scholarly papers, he also blogs in a way that folks like me who haven’t even taken ECON 101 can understand. I haven’t read any of his papers to see what they look like, but in his blog he is fairly clear about his opinions on various issues related to immigration. See for instance this interesting post about “protectionism” on Broadway! I don’t always agree with what he has to say, but it is always a pleasure to read well-written posts that say something reasonable, backed with data and analytic rigor.
So I went to the lecture with great anticipation. I arrived a few minutes late, and the room was already full. The presentation itself was good, but a bit of a letdown, perhaps because he didn’t want to be too controversial in a “distinguished lecture” type setting? He presented data: the increase in immigration since 1964, the concentration of that immigration in select areas making the effects local, the confounding factors when you try to analyze the wage effects of immigrants, the fact that the wage-depressing effects of immigration have most hurt the lower strata of society, the fact that an average immigrant today earns less than the native born (a change from the 60s), and so on. However, he didn’t go much further than saying something which is both true and a copout — namely, that what policy implications you derive from this data depend on what your objective function is. He joked about letting everyone in if the goal was to alleviate world poverty, or some such.
I also noticed that he did not split his data into the effects of legal and illegal immigration. It would be interesting to know if there are differences. Amongst legal immigrants, does employment-based versus family-based immigration make a difference? Especially since one of the things the now-dead “comprehensive immigration reform” bill was discussing was a points-based system for immigration.
A good read at http://stopbadware.org; it appears to be a major campaign backed by Google, Lenovo, and Sun Microsystems.
“Several academic institutions and major tech companies have teamed up to thwart ‘badware’, a phrase they have coined that encompasses spyware and adware. The new website, StopBadware.org, is promoted as a “Neighborhood Watch” campaign and seeks to provide reliable, objective information about downloadable applications in order to help consumers to make better choices about what they download on to their computers. We want to work with both experts and the broader internet community (.orgs and .edus) to define and understand the problem.”
The Workshop on Models of Trust for the Web (MTW’06) will be a one-day workshop held on May 22 or 23, 2006 in Edinburgh in conjunction with the 15th International World Wide Web Conference. Tentative deadlines are January 10 for paper submission and February 1 for acceptance notification.
“As it gets easier to add information to the web via html pages, wikis, blogs, and other documents, it gets tougher to distinguish accurate information from inaccurate or untrustworthy information. A search engine query usually results in several hits that are outdated and/or from unreliable sources and the user is forced to go through the results and pick what she/he considers the most reliable information based on her/his trust requirements. With the introduction of web services, the problem is further exacerbated as users have to come up with a new set of requirements for trusting web services and web services themselves require a more automated way of trusting each other. Apart from inaccurate or outdated information, we also need to anticipate Semantic Web Spam (SWAM) — where spammers publish false facts and scams to deliberately mislead users. This workshop is interested in all aspects of enabling trust on the web.”
The Semantic Web and Policy Workshop (SWPW) held at ISWC had some great presentations and discussions on policy-based frameworks for security, privacy, trust, information filtering, accountability, etc. The SWPW web site has the proceedings, papers, presentations and some pictures. Watch for announcements about a related workshop on Models of Trust for the Web that will be held at WWW2006.
Rob Clyde, Vice President of Technology in the Office of the CTO at Symantec Corporation, presented his keynote this morning. Along with the usual security material, he reported some interesting statistics —
- Phishing is becoming an increasing threat: 3 to 4% of users respond to such mails — a much higher rate than for traditional e-mail spam.
- In the first half of 2005, phishing e-mails increased from 2.99 million per day to 5.7 million per day.
- 31% of online consumers are buying less due to increased web security threats.
- The US leads in the number of reports of hacked machines, followed closely by Germany.
- Broadband penetration is actually increasing security threats. Many personal machines are now vulnerable to hackers using them as bots for DoS attacks.
- DoS attacks are now a business. Such attacks are available for as little as US $300. Where?
Some other interesting comments:
- The increasing speed at which worms propagate now demands better use of proactive measures.
- In the absence of such measures, Akamai and its expandable bandwidth pipes are the only defense against DoS attacks. Looks like more revenue for Akamai in the days to come! Maybe Akamai’s stock is in for a ride.
Finally, and of importance to us — Symantec is now working on combating web (and blog) spam. They see this as one of the next big security threats.
- policy-based frameworks for the Semantic Web for security, privacy, trust, information filtering, accountability, etc.
- applying Semantic Web technologies in policy frameworks for application domains such as grid computing, networking, storage systems, pervasive computing, and specifying norms for agent communities.
In addition to presentations of nine submitted papers, Ora Lassila will give an invited talk on “Applying Semantic Web in Mobile and Ubiquitous Computing: Will Policy-Awareness Help?” and a panel of policy researchers will initiate a discussion of “The 2005 Web Policy Zeitgeist”. The proceedings are available, and participants can register online.
The UMBC website now publishes RSS feeds for news and podcasts.
Good move – subscribed!
At least now I will follow what all UMBC students should have been checking regularly.
UMBC is providing a “sneak peek” of its new homepage, which begins the process of redesigning UMBC’s entire web presence by summer 2006. If you’d like to comment on the new homepage or the process for developing the site to follow, use the comment form below.
However, I must add that I agree with many of the comments on the blog. It would be nice to see at least some of these comments incorporated into the new design.
On a similar note, I wonder when UMBC will host student blogs along the same lines as many other universities.
UCSD physicist Jorge Hirsch has proposed the h-index as a new bibliometric measure of a scholar’s impact based on the number of publications and how often each is cited. See this story in Physics World for an overview. The h-index can be defined as follows:
A person who has published N papers has h-index H iff H of those papers each have at least H citations and the remaining N-H papers each have no more than H citations.
You can easily estimate an author’s h-index using Google Scholar since the results are ranked (more or less) by the number of citations which are shown in the summaries. Try looking for papers authored by Turing. His 15 most cited papers all had at least 17 citations. His 16th most cited paper had only 13 citations. So Alan Turing’s h-index is 15.
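This ranked-list estimate is easy to code. Here’s a minimal Python sketch; the citation counts in the example are made up to mirror the Turing numbers above (only the “at least 17” and “13” figures come from the post):

```python
def h_index(citations):
    """Largest H such that H papers have at least H citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank   # the rank-th most cited paper still has >= rank citations
        else:
            break      # counts are sorted descending, so no later rank qualifies
    return h

# Mirroring the Turing example: 15 papers with at least 17 citations,
# then a 16th paper with only 13 (illustrative counts, not real data).
print(h_index([17] * 15 + [13]))  # prints 15
```

This is exactly the by-hand procedure described above: walk down the citation-sorted result list and stop at the first rank that exceeds its paper’s citation count.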
This example, of course, shows one problem with basing the calculation on Google Scholar — it only counts papers it finds on the Web, a disadvantage for Turing. Another is that Google doesn’t eliminate “self citations” — citations where there is an author common to both the cited and citing papers. Accepting self citations invites gaming the system by always citing all of your earlier publications. CiteSeer is a web-based system that does eliminate self citations, as does ISI’s venerable citation database. But CiteSeer doesn’t rank author queries by citation count, and it also weights citations by year. ISI’s coverage of Computer Science is not comprehensive, and access costs money. So Google Scholar seems to be the easiest way to play with the h-index idea for CS at present.
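The self-citation rule — an author common to both the cited and citing papers — boils down to a shared-author test. A hypothetical sketch (author names as plain strings, glossing over the name disambiguation that real citation databases must do):

```python
def is_self_citation(citing_authors, cited_authors):
    """True if the citing and cited papers share at least one author."""
    return not set(citing_authors).isdisjoint(cited_authors)

# Hypothetical example: the two papers share an author, so a system
# like CiteSeer would exclude this citation from the count.
print(is_self_citation(["A. Turing", "A. Church"], ["A. Turing"]))  # prints True
```

The hard part in practice isn’t this test but deciding when two author strings refer to the same person.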
Google Scholar and CiteSeer automatically discover and index papers of all types — journal articles, conference papers, book chapters, and even technical reports — unlike traditional citation databases like ISI’s. Should all of these contribute to a scholarly output metric? I think it’s not unreasonable. A technical report cited by 50 other papers has obviously had impact. Moreover, a paper’s visibility on the Web may become the dominant factor in its significance.
Hirsch argues that h is better than other commonly used single-number criteria for measuring a scholar’s output. He’s even suggested it could be used for tenure and promotion decisions. Moreover, he goes on to propose that a researcher should be promoted to associate professor when they achieve an h-index of around 12, and to full professor when they reach an h of about 18. (Link)
What counts as a high number will vary across disciplines and even sub-fields within disciplines. Moshe Vardi tells me that Computer Scientists with h>50 are rare and Jeff Ullman’s number in the mid-60s is the highest he’s seen.
Finally, single-number measures like this are always just shadows cast on the wall of a cave.
“While overall graduate enrollment in science and engineering programs reached an all-time high in fall 2003, it actually declined 3 percent in computer science. CS was the only field to see a drop and this was its first decrease since 1995. In addition, CS experienced the biggest drop (23 percent) among S&E fields in the number of full-time students with temporary visas who were enrolled for the first time.”