
29/10/08

Web 1.5: "social networking" between email and Web 2.0


«SOEs (social online environments, social networking sites, e.g. Hi5, Facebook, MySpace, Friendster, LinkedIn, Orkut, Ringo, etc.) are social software tools made available on a social networking website that allow each user to create a profile of themselves (through descriptions, photos, lists of personal interests) and build a personal network of social relationships that connects them, intentionally and selectively, with other users belonging to their own personal network or to other personal networks with shared interests, through the exchange of private and public messages.
Currently very popular social sites such as Hi5, MySpace and LinkedIn, launched in 2003, or Orkut and Facebook in 2004, are an update and improvement on the first generations of social software for computer-mediated communication, such as the IRC of the 1990s.
Today's Web 2.0 social software nevertheless represents an innovative step forward, enabling real-time conversation, the sharing of personalised image, music and video files, and online videoconferencing with users from other personal networks. These tools offer enormous potential as a dynamic space for sociability, conviviality and the sharing of interests, tastes and styles. And, like its predecessors, as a space of conviviality and sharing, social software fosters both the maintenance of pre-existing offline sociabilities and the expansion of purely online ones.»

source: OBERCOM, «Web 1.5: As redes de sociabilidades entre o email e a Web 2.0», Flash Report, May 2008, p. 3
Retrieved from:

29/01/08

Conference: Web 2.0 and Beyond: Applying Social and Collaborative Tools to Business

Conference
Web 2.0 and Beyond: Applying Social and Collaborative Tools to Business Problems, 5-6 March 2008, London www.unicom.co.uk/socialtools

There is an 'Early Bird' price before 18 February, and a further discount for students/academics/charities.

BACKGROUND:

There is no doubt that we are now entering the "Web 2.0 generation", a time which especially values the dynamic, the interactive, the open and the collaborative. Web 2.0 is changing the way we live and the way we do business. "The key issues for Web 2.0 users will be around authenticating who you are dealing with, as the technology is increasingly used for business purposes, including customer feedback" (Philip Virgo). However, a key challenge for them is how to integrate these new technologies within the enterprise and figure out which best suit their needs and will make inter- and intra-organisational communications easier and more effective. Meanwhile, although Web 2.0 is becoming influential and powerful, a cross-industry survey shows that "33% said not knowing how to measure the impact of the technologies is the most serious challenge to implementing them across their company" (Computer Weekly).

This conference presents new viewpoints and case studies explaining the value, application (and perhaps some downside) of social tools in a business context. It looks beyond the "why" of Enterprise 2.0 to explain the "how", giving all participants a true learning experience.

PRESENTERS INCLUDE:
David Gurteen, Gurteen Knowledge; Lee Bryant, Headshift; John Davies, BT; Ian Hughes, IBM; Tom Ilube, Garlik Ltd; Adrian Moss, Focus.

CASE STUDIES INCLUDE:
Will Wynne, ArenaFlowers.com; Clare Reddington, Ed Mitchell Consulting; Joyce Lewis, University of Southampton; Ian McNairn, IBM

For the brochure and more information, please visit
www.unicom.co.uk/socialtools, or email hetty@unicom.co.uk

Kind regards,

Hetty

12/11/07

PhD Workshop: Researching Social Software

PhD Workshop
Researching Social Software
28-30 November 2007
Location: Instituto de Ciências Sociais, Universidade do Minho, Campus de Gualtar, Braga, Portugal


Course Leader: Adrian Mackenzie, University of Lancaster

Adrian Mackenzie is a Professor at the Institute for Cultural Research, Faculty of Social Sciences, University of Lancaster.

His research interests include anthropology of post-representational thinking and cognition; theories of capacity, individuation, invention and differentiation; wirelessness: cultural politics of infrastructure, embodiments of connectivity and network images; repetition and difference in video and audiovisual technology; new media/technological cultures and practices. See: http://www.lancs.ac.uk/staff/mackenza


His most recent books are:
• Cutting Code: Software and Sociality, Digital Formations Series, Peter Lang, NY, 2006

• Transductions: Bodies and Machines at Speed, Continuum Press, 2002; reprinted

Description and objective

This PhD-course aims to examine and discuss different versions and definitions of social software and how experiences of relation to others can be understood in social software. It also aims to situate software in terms of processes of production, consumption and exchange, and to discuss different approaches, techniques and difficulties involved in researching software.

Structure of the course
Each session would be 2.5 to 3 hours. Each session would have one or two readings to be done in advance. There are also websites and internet examples that should be consulted in advance. Student presentations would be part of each session.

Background reading for the workshop:
Maurizio Lazzarato (1996) 'Immaterial Labour', in Paolo Virno & Michael Hardt (eds.) Radical Thought in Italy: A Potential Politics, Minneapolis: University of Minnesota Press.

Programme
Session 1: What is social software?
Exploration of different versions and definitions of social software. This would best be done by working with some major examples, including eBay, Facebook, MySpace, Flickr, and YouTube. The session would centre on close analysis of the visual and material cultures of the examples. The reading for this session comes from a well-known internet commentator and publisher, Tim O’Reilly.

Session 2: Living with social software: self-other relations and sociality
Key example: Facebook or Second Life
Main focus of this session will be on how experiences of sociality, of belonging, of relation to others can be understood in social software. The readings analyse this from very different angles. The first is informed by Marxist thought, the second by social studies of technology.

Session 3: Social software in technological economies
Key example: Google
This session will situate software in terms of processes of production, consumption and exchange. The readings offer very different perspectives on this. Barry and Slater’s article comes from sociologies of science and technology. Benkler’s work comes from liberal political thought.

Session 4: Researching social software
Session on different approaches, techniques and difficulties in researching software.

Registration and contact:

The application deadline is 20th November 2007.
Please send by email a short description (no more than one page) of your PhD project, specifying your name, email address, affiliation, supervisor, your particular interest in the seminar and why you would benefit from attending it, to the organization committee (social.software.portugal@gmail.com). Number of participants: max. 15.
A fee will be charged for participation to cover administrative costs, tea/coffee, lunches and one dinner during the seminar. The fee is 60 euros, payable in cash on the first day of the seminar (an official receipt will be given). Travel and accommodation are the responsibility of the participant.

For more information, contact:

José Pinheiro Neves (social.software.portugal@gmail.com)

Zara Pinto-Coelho (social.software.portugal@gmail.com)


See also:
http://socialsoftware-portugal.blogspot.com/
http://neves.do.sapo.pt/mackenzie/PhDSeminarMackenzie25Set07.pdf

Organization:


Centro de Estudos de Comunicação e Sociedade
Centro de Investigação em Ciências Sociais
Universidade do Minho, Portugal


..................................................................................................................................................

Call for applications
PhD Student Workshop: Researching Social Software
28-30 November 2007
Location: Instituto de Ciências Sociais, Campus de Gualtar, Braga
• Guest lecturer: Adrian Mackenzie, University of Lancaster (http://www.lancs.ac.uk/staff/mackenza)
Description and objective: This workshop aims to examine and discuss different versions and definitions of social software, and how experiences of relating to others can be understood within this framework. It also aims to situate social software in terms of production, consumption and exchange, and to discuss different approaches and techniques involved in researching social software.
Structure: Each session will last 2.5 to 3 hours. Prior reading of one or more articles and consultation of internet sites is expected.
Presentations of the participants' PhD projects are part of each session.
Session 1: "What is social software?" Examples: eBay, Facebook, MySpace, Flickr and YouTube
Session 2: "Living with social software: self-other relations and sociality". Examples: Facebook or Second Life
Session 3: "Social software in technological economies"
Session 4: "Researching social software"

Applications and contacts:
Applications must be submitted by 20 November 2007. Candidates should email a brief description (no more than a page and a half) of their PhD project, specifying their name, email address, institutional affiliation, supervisor's name, their interest in the seminar and the benefits they expect from attending it, to the organising committee (social.software.portugal@gmail.com).
We will accept 15 participants.
The registration fee, which covers administrative costs, tea/coffee, lunches and one dinner, is 60 euros, payable on the first day of the workshop (an official receipt will be given).
Travel and accommodation are the responsibility of the participants.
For more information, contact:
José Pinheiro Neves (social.software.portugal@gmail.com)

For more details on the programme and course requirements, see:

09/11/07

Bibliography for the workshop - Interview with O'Reilly

People Inside & Web 2.0: An Interview with Tim O’Reilly

in: http://www.openbusiness.cc/2006/04/25/people-inside-web-20-an-interview-with-tim-o-reilly/

Tuesday, April 25th, 2006

OpenBusiness spoke with Tim O’Reilly about the evolution of the Web and its most current trends, which are commonly labeled as Web 2.0. In September 2005, Tim wrote a seminal piece that presented many of the aspects of Web 2.0 and which now anchors much of the buzz around a new generation of internet applications. In the interview, he re-emphasizes the most important points of this development, talks about the evolutionary relationship between open & free and shares his vision of bionic systems that combine human and computational intelligence.

OB: At OpenBusiness, we’re especially interested in the rise of open content and open services and how they deal with the concept of “free”. How do you define that relationship? When are open and free the same and in what ways are they different?

For the last couple of years, I’ve been preaching an idea that Clayton Christensen first wrote about and called the “Law of Conservation of Attractive Profits.” We talked about it in response to my talk, the Open Source Paradigm Shift, in which I focused a lot on lessons from the IBM PC.

What I saw was that IBM – through genius or accident or both – introduced this new, open architecture for a personal computer: anyone could build one, and that was open hardware. It was not Open Source as we know it today but it was pretty close. IBM said, “Everything has to be built with off-the-shelf parts from at least two suppliers, here is the specification, now go out, be fruitful and multiply.” The unintended consequence of that decision was that it took all the profits out of assembling computer systems, which had been the source of great profits in the past. IBM was a completely dominant company and now we have low-margin players like Dell. But we also ended up with high-margin players like Intel and Microsoft, neither of which IBM foresaw. They signed a deal with Microsoft to do the operating system, Intel got control of a key component and ended up with near-monopoly profits, all while IBM struggled for many years. They have come back now but they had destroyed the computer industry as they knew it, replaced it with a new one, and there was a period there where – at least from the point of view of IBM – all the profits were disappearing from the system.

So when I started seeing comments by Ballmer saying Open Source is an intellectual property destroyer and it’s taking all the profits out of the system, I thought this is just what had happened before. We’re seeing the commoditization of software, where the value is going out of many classes of software that people used to pay for. But value is being rediscovered, and it’s moving up the stack and down the stack. That led me to a couple of new ideas that we now call Web 2.0: the Internet as a platform, information businesses using software as a service, harnessing collective intelligence – that’s moving up the stack. Down the stack is what I call “Data as the Intel Inside.” This stack model is repeating itself as this economic model is repeating itself, and so I think that each time you see something becoming free, something else is becoming expensive, which goes back to the Law of Conservation of Attractive Profits.

Software became free, content even became largely free, but now Google and Yahoo are collecting enormous sums of money by directing attention to their free content using a platform that’s built on top of their free software. Similarly, we looked at Napster and thought that all of music would be free, and now Apple has a billion-dollar business selling songs. We’re also just at the early stages where Skype is making telephone calls free and Asterisk is making telephone calls free – relatively speaking – and I believe that there will be new sources of revenue that will be overlaid on top of that market.

I also think that it’s really easy, early in a market with disruptive innovation, to see everything becoming cheap or free or commoditized and not to see the areas where there are new sources of control and new sources of revenue.

OB: Especially in the context of Web 2.0 business models, there has been a lot of emphasis on the ad-based model, which now supports everything from Wi-Fi to your mail account. What other layers do you see on top of that and are there alternate models that emerge?

Oh, absolutely – it actually goes back to this idea of “Data as the Intel Inside”. We look at all these mapping applications, for example, in which NavTeq and Tele Atlas are licensing data to Google, Yahoo and MSN, where those companies are monetizing it by advertising but the data suppliers are monetizing it by license. The economic ecosystem is often much more complex than what people realize, because I don’t think that it’s just an ad-supported market.

Ads are one way of collecting money but they’re far from the only way and if you look at the complexity of the web ecosystem, there are all kinds of people who are participating. All of those free bloggers are actually paying their blogging service provider or their ISP for hosting, as an example of the different models that start to work together and build any complex ecosystem.

OB: As you mentioned before, much of Web 2.0 is about user-generated content and harnessing collective intelligence. What were some of the catalysts that drove the web in this direction recently and what has sparked these recent shifts?

I wouldn’t say that anything really sparked it. Instead, we talk of network effects, by which a network’s value grows with the connections it makes. The internet has always had the characteristic that its value was driven by the number of nodes, and all the emergence of user-generated content and harnessing of collective intelligence is just an expression of that fundamental dynamic.

What really happened was that the original Web had all of these characteristics: it was from the edges, it was bottom-up, it was long-tail. But then we had this detour where traditional content companies, and people who were imitating traditional content companies, decided that it was all about publishing, that “content is king” and that this would get all the eyeballs that would be monetized by advertising – that was the dot-com boom and bust. But when the dust cleared, you saw that some companies had managed to survive. Pets.com was gone but here was Yahoo, here was Google, here was eBay, here was Amazon. All these companies survived, and we asked ourselves, back when we first coined the term Web 2.0, “What distinguishes them?” In one way or another, they had rediscovered the logic of what makes Internet applications work – they had understood network effects.

Overall, there are certainly defining moments. For Google, it was Overture coming up with the advertising model, which put together Google’s user demand engine with a financial model. There was also the insight that you don’t just study the contents of documents but what people do with them as evidenced by the links they make.
If you look at eBay, it’s pretty clear that they had leveraged network effects in a fairly fundamental way too. Pierre [Omidyar] has this idealistic vision of a system he’s building in which buyers and sellers learn to trust each other.

Amazon is also a great example I keep bringing up because their system didn’t have a built-in architecture of participation; but they still worked it! On every page, they invite their users to participate, to annotate their data and to add value. They effectively overlaid an architecture of participation on a system that doesn’t intrinsically have one. In many ways, I think they’re the best company to study because they worked it, whereas the other companies mostly lucked into a sweet spot.

So as far as turning points go, the real one came when Tim Berners-Lee introduced the World Wide Web, and everything else has just been a voyage of discovery.

OB: Since those earliest days, the Web has been an open platform, but over the years, especially more recently, there has been the emergence of companies like Google and Yahoo that have started to centralize more and more data, attention and now also user-generated content like photos and videos. Is there an increasing trend towards more centralization on the Web today?

Yes and No. On the one hand, the Web is extraordinarily good at decentralizing data: everyone has their own website with their own location and storage. Some sites have managed to become large aggregators for a certain class of data, such as the various photo sharing sites or music sharing sites for example.

But when you really think about centralization vs. decentralization, the biggest aspect of centralization actually comes via large-scale aggregators like Google – because it doesn’t matter whether you put your data on Google or on your own site: you’re still putting it on Google in the end as they’re indexing everything.

The real lesson is that the power may not actually be in the data itself but rather in the control of access to that data. Google doesn’t have any raw data that the Web itself doesn’t have, but they have added intelligence to that data which makes it easier to find things.
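The kind of "added intelligence" described here, an index built over data the indexer does not own, can be sketched minimally. This is a toy inverted index for illustration only, not a description of Google's actual system; the page names and query function are made up:

```python
def build_index(pages):
    """Build a minimal inverted index: word -> set of page ids.

    `pages` maps a page id (e.g. a URL) to its raw text. The indexer
    does not own the pages; it only adds a searchable layer over them.
    """
    index = {}
    for page_id, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(page_id)
    return index

def search(index, *words):
    """Return the pages containing every query word (AND semantics)."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()

# Hypothetical pages standing in for the open Web
pages = {
    "a.html": "social software and the web",
    "b.html": "software as a service",
    "c.html": "the social web",
}
index = build_index(pages)
```

The raw data stays where it is; the value sits in the index, which is exactly the control-of-access point made above.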

To me, one of the seminal applications that made me think seriously about the Internet as Platform was Napster in contrast to MP3.com. I had visited MP3.com not long before Napster appeared and they were proudly showing me their servers with “all this music” on them. But then the kid who grew up in the age of the Internet came out with Napster and asked, “Why do you need to have all this music in one place? My friends already have it and all we need is our set of pointers.” It’s that evolution from data to metadata that’s really interesting to me and where people are going to get access to it.

There are some cases where a certain type of data is hard to generate, as in DigitalGlobe launching a satellite to supplement the US satellite data or NavTeq driving the streets for 500 million dollars plus to build a unique database – that’s one source of control. But the aggregators – the Yahoos, the Googles, the Amazons – are the other type of control, with data that they don’t actually own but which they control with the namespace or the search space or some higher-level metadata.

I think that we’ll find in some ways that this is the real secret of the relationship between free and non-free content. There will be so much free content that it’s going to be hard to find and those who can help you find what you want will be able to charge for it – in one way or the other, whether it’s through advertising or through subscription or something else. It’s about managing to find “the best”, and “the best” is a kind of metadata.

OB: What developments potentially worry you in this space?

First off, I think there will always be negative developments. All new technology goes from its wonderful use when all things seem possible and then, [Tim laughs] we get the blue screen of death – that’s a natural alternation. When bad things happen, they’re just a part of the evolution and of the ongoing cycle.

What worries me the most are governments getting involved and backing their existing companies. The patent system is a great example where the government is clueless and is disrupting the real activity of the market. We see it in the way that the Digital Millennium Copyright Act is trying to protect the interests of existing players while stifling the future. All of this is going to drive innovation to markets in countries that are more forward-looking because the internet is of course a global phenomenon and if you outlaw something, it will simply crop up somewhere else. So our challenge as an industry and as an economy is to discover the rules by which we can create value and ultimately create wealth in this new environment. It’s not about protecting the old ways of creating wealth but rather that creative destruction has to take place. Although companies may suffer from it, I think we’ll all be better for it.

OB: What upcoming developments excite you most and what do you see missing currently which you’d like to see grow?

I have been thinking a lot about “bionic software”, a concept that was introduced by You Mon Tsang with his start-up called Boxxet, by which people become components in software. I’ve talked about this for a number of years and I believe that Amazon’s Mechanical Turk might have been indirectly inspired by a talk I gave there in May of 2003. I talked about the Turk and asked, “What are the differences between web applications and PC applications?” Web applications have people inside of them. You take the people out of Amazon and it stops working. It’s not a one-time software artifact; instead it’s an ongoing process where people have to do things every day for the software to keep working. So I referred to the Mechanical Turk, the chess-playing hoax which had a man inside, as a metaphor for the difference between internet applications and PC applications.

Amazon has given it a new twist and so have many other applications by harnessing the users to perform tasks that you couldn’t do with just the computer. And there is a really interesting thread there because for a long time, many people thought that we were going to arrive at some kind of artificial intelligence where we get the computers to be smart enough and match people. And what we’re doing instead is building a hybrid system, in which the computers make us smarter and we make them smarter – that’s bionic software.

When Google gives you 10 results and says, “One of these might be what you’re looking for”, it leaves us with the last mile. When a website uses a little CAPTCHA block, it’s asking that we do something that’s easy for humans but hard for computers when it comes to authentication.

The tag cloud also, which has spread from Flickr to all kinds of other websites, is a user-interface element that is basically built by the users of the system as the system is being used. So we are the software component that generates the tag cloud – we’re the input – and the tag cloud is a metaphor for this new kind of software.
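A tag cloud of this kind is easy to sketch: aggregate the tags users supply and map each tag's frequency to a display weight. The scaling below is a hypothetical linear one, not Flickr's or anyone else's actual algorithm:

```python
from collections import Counter

def tag_cloud_weights(tags, min_size=1.0, max_size=3.0):
    """Map raw tag frequencies to font-size weights for a tag cloud.

    Illustrative helper: real sites use their own (often logarithmic)
    scaling. The most frequent tag gets max_size, the rarest min_size.
    """
    counts = Counter(tags)
    lo, hi = min(counts.values()), max(counts.values())
    span = hi - lo or 1  # avoid division by zero when all counts tie
    return {
        tag: min_size + (n - lo) / span * (max_size - min_size)
        for tag, n in counts.items()
    }

# Tags contributed by many users, aggregated into one cloud
tags = ["sunset", "beach", "sunset", "cat", "sunset", "beach"]
weights = tag_cloud_weights(tags)
```

The users are the input: every tag they add re-shapes the weights, which is the sense in which the interface element is "built by the users of the system as the system is being used".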

OB: And to close what’s been a fascinating interview, I’m curious what you saw in the last month or two that stood out to you and sparked your curiosity.

There’s a site that’s essentially a “Hot-Or-Not” for avatars in virtual worlds [http://RateMyAv.com/] where you can put up your character from Second Life or World of Warcraft and get it rated by users just like the Hot-Or-Not site [http://www.HotOrNot.com/]. That was really interesting to me because it showed that the real and virtual are interpenetrating further. We’re going to see many of the things that took place on the web increasingly recapitulate themselves in some of these virtual worlds. There’s a real opportunity because many economic models out on the web could obviously be reproduced. It’s a cool, little signal of a future to come…

21/09/07

Web 2.0: definition, characteristics and examples - by Ana Neves

Web 2.0: definition, characteristics and examples

by Ana Neves

July 2007


"Social tools are here to stay and will forever change the way people use, and expect to be able to use, the Internet. For that reason, it is worth trying to understand a little more about what these social tools and web 2.0 actually are.

Much has been said lately about web 2.0: a new "version" of the Internet that enables interaction closer to the kind we are used to in person.

The profusion of sites built on the social tools that make up this "new" virtual landscape has grown exponentially. They enable levels and patterns of interaction, sharing and exchange of opinion that until recently were only possible offline. Imagination is almost always the only limit, and many sites have married the "Internet" channel with social tools to offer functionality never before possible.

Some of these sites:

del.icio.us – a place to store and share favourite sites (a similar site in Portugal: Tags no Sapo)

Digg – a site made up of news items found by users and suggested by them as interesting or high-quality (a similar site in Brazil: rec6)

Flickr – a site for sharing and searching photographs taken by the users themselves (similar sites: Fotos no Sapo in Portugal and 8p in Brazil)

MySpace – a community for finding people with similar interests and sharing ideas, photos and videos

Netvibes – create your own page with the content you like

Patient Opinion – a site where citizens can talk about their experience of British health institutions

Technorati – search across blog posts and tagged social media

Twitter – a site used by people all over the world to tell others, friends or not, what they are doing at any given moment

Wikipedia – an encyclopaedia written collaboratively by its readers

YouTube – a site that lets users watch and share videos (a similar site in Portugal: Videos no Sapo)

Zoho – word-processing, spreadsheet and many other applications available online for collaborative work with other users (a similar site in Brazil: Aprex)

These sites are listed alphabetically and were chosen on the basis of their popularity, but with the aim of illustrating the variety of uses of social tools.

But what, after all, is this web 2.0? According to the Portuguese version of Wikipedia (see box), "Web 2.0 is a term coined in 2003 by the American company O'Reilly Media to designate a second generation of communities and services based on the Web platform, such as wikis, applications based on folksonomy, and social networks. Although the term connotes a new version of the Web, it does not refer to an update of its technical specifications, but to a change in the way it is regarded by users and developers".

Thus, web 2.0 is essentially about creating environments conducive to the creation and maintenance of social networks (open or closed, public or private). This spirit extends beyond the walls of any given site: increasingly, links are established between several sites in order to offer additional functionality to the members of the respective communities.

It is because of this goal of openness and transparency that web 2.0 is also largely characterised by the free nature of (most of) its sites and tools, and by the creation and publication of APIs (Application Programming Interfaces) that allow communication with other sites. These have lately given rise to multiple plugins, developed essentially by the community of users, which extend the basic functionality of a given site or application and/or aggregate content.
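The plugin idea, community code hooking into a site's extension points to enrich its content, can be sketched in a few lines. This is a hypothetical registry pattern for illustration; the hook names and URL are invented and do not belong to any real site's API:

```python
# Minimal plugin registry: a site exposes an extension point and
# community-written plugins register themselves against it.
PLUGINS = []

def plugin(func):
    """Decorator that registers a content-enrichment plugin."""
    PLUGINS.append(func)
    return func

@plugin
def dedupe_tags(item):
    """Community plugin: normalise an item's tag list."""
    item["tags"] = sorted(set(item.get("tags", [])))
    return item

@plugin
def add_permalink(item):
    """Community plugin: attach a permalink (hypothetical URL scheme)."""
    item["permalink"] = "http://example.com/posts/%d" % item["id"]
    return item

def render(item):
    """The site's core runs every registered plugin over a content item."""
    for hook in PLUGINS:
        item = hook(item)
    return item

post = render({"id": 7, "tags": ["web2.0", "apis", "web2.0"]})
```

The core site stays small; the functionality users actually see is assembled from whatever plugins the community has contributed, which is the dynamic the paragraph above describes.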

The elements / features generally present in web 2.0 sites are:
* blogs (in Portuguese, blogues) – diary-like sites in which texts are presented in reverse chronological order
* social bookmarking – a system of bookmarks (favourites) accessible from any computer with Internet access, which allows them to be commented on and shared with other people
* wikis – sites whose content is added and maintained by those who visit them
* tagging – the ability to associate one or more terms or keywords with an item of content (e.g. a text, photo or bookmark)
* RSS feeds – (RSS, Really Simple Syndication) a way of alerting a site's members / visitors to changes in its content. These feeds, produced automatically by many of the available tools, can then be read in online feed readers (e.g. Google Reader - www.google.co.uk/reader), on the desktop (e.g. RSS Bandit - rssbandit.org) or attached to an email client application (e.g. Attensa - attensa.com).
* content aggregation – making content published on other sites available on one site, either to ease access to it (e.g. Netvibes) or to enrich it with the opinions of other users (e.g. Digg).
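The RSS mechanism in the list above is just structured XML that a reader polls and parses. As a sketch, the made-up minimal RSS 2.0 feed below (titles and links are invented) is parsed with Python's standard library, which is roughly what a desktop feed reader does on each refresh:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 document, inlined for illustration;
# a real reader would fetch this over HTTP from the site's feed URL.
RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <item><title>Post one</title><link>http://example.com/1</link></item>
    <item><title>Post two</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def feed_entries(xml_text):
    """Extract (title, link) pairs from an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

entries = feed_entries(RSS)
```

Comparing the entries fetched now with those fetched last time is how a reader alerts you to changes in a site's content.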

Note: This text will be continued over the coming months with examples of how social tools can also be used in an organisational context and of how they relate to knowledge management".


In:
http://www.kmol.online.pt/artigos/200707/nev07_1.html