Via the Learning From Online Worlds; Teaching In Second Life blog I notice that JISC have made available two e-books that contain the proceedings of their online conference 'Innovating e-Learning 2007: Institutional Transformation and Supporting Lifelong Learning' which was held earlier this year. I ran a session on Second Life as part of the conference.
The results of this session and the ensuing discussion are available in the Institutional transformation [PDF] e-book. I'm not sure who has pulled all this material together but they've done a pretty good job. Thanks!
Monday, 24 December 2007
Sunday, 23 December 2007
Machinima Commons
There's been a thread of discussion recently on the SL Machinima list about whether it is necessary to ask permission before filming on someone else's SL land. There doesn't seem to be a clear answer on this... which tends to make me think that it is probably best to ask when in doubt.
Some of that doubt could be removed by agreeing a mechanism for Second Life land-owners to explicitly state whether it is OK for machinima-making to take place within a given region and, if so, what conditions apply (non-profit only, attribution must be given, etc.). Hey, it sounds a bit like Creative Commons doesn't it??
I wonder whether what we need is a set of simple Machinima Commons 'licences' (MC) which could be added to the in-world description of a region. I suspect that we only need two:
MC BY - This region may be used in machinima provided attribution is given.
MC BY-NC - This region may be used in machinima provided attribution is given and the resulting work is used only for non-commercial purposes.
My attempt at an MC icon is shown above. This is available in various sizes from Flickr.
MC licences (if they existed) would indicate an allowable use of a region of land. They are not intended to be used as the licence under which the resulting machinima is made available - Creative Commons or some other content licence would be used for that.
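If the licence token simply lived in the region description, checking it would be trivial for machinima makers and their tools. Here's a minimal sketch of the idea in Python — the function name and the example descriptions are purely illustrative, not part of any real Second Life API:

```python
# Hypothetical sketch: reading a Machinima Commons (MC) token from a
# region description string, per the two-licence scheme proposed above.

def machinima_permission(region_description: str):
    """Return the MC licence token found in a region description, or None."""
    # Check the more specific token first, so "MC BY-NC" is never
    # mistaken for plain "MC BY".
    for token in ("MC BY-NC", "MC BY"):
        if token in region_description:
            return token
    return None

print(machinima_permission("Eduserv Island - filming welcome - MC BY-NC"))  # MC BY-NC
print(machinima_permission("Public sandbox - MC BY"))                       # MC BY
print(machinima_permission("No licence info here"))                         # None
```

The point of the sketch is the substring convention itself: because the token travels with the land's own metadata, anything that can read a region description can check the licence, with no external registry to fall out of date.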
There is now a machinima-friendly page in the SL Wiki which can be used to indicate that you allow filming on your sim. My suspicion is that this approach will not work very well - it will only ever have partial coverage and, like many registries, will probably go out of date fairly quickly. In general, it is better to keep this kind of information as close to the content as possible.
Labels:
CC,
CreativeCommons,
machinima,
machinimaCommons,
secondlife
Saturday, 22 December 2007
NMC Orientation
I was reminded recently about NMC's bespoke Second Life orientation experience and decided to try it out for myself. I like it.
Web registration is more streamlined than with the standard Second Life registration pages and once in-world, the orientation area is less confusing, at least initially - though I must confess that at the point that the orientation split into multiple paths I felt somewhat bewildered about which direction to take.
Here's a picture of my newly rezzed avatar, Alrightme Ansome (to be pronounced with a strong west-country accent) reading some of the initial orientation signs.
Overall I'd rate it as a much better way to get started with Second Life, particularly for educators, than the standard orientation.
My only concern was that at the time I did it there weren't any other residents around to share the experience... none... nada. My gut feeling is that for this to be a truly useful orientation there have to be meeters and greeters around.
Now, I was joining on UK time - I can't quite remember when - early evening I think. It may be the case that NMC have people around during US hours? Perhaps what we need to do is firstly, encourage the use of the NMC experience for new UK education residents and secondly, organise a rota of UK meeters and greeters on UK timescales?
PS. If anyone, preferably of a Cornish persuasion, would like to take over the controls of Alrightme Ansome, you are welcome to him. I probably won't make much more use of him. He's been carefully used twice and has never left the NMC orientation area. Get in touch if you are interested.
Linden Lab, VAT, data protection and all that
I note, via Milton Broome's blog, that the University of Edinburgh have made available a draft set of guidelines for making use of external Web 2.0 applications (direct link to the PDF) within university teaching and research activities.
One is kinda left wishing that things didn't have to get so legal sounding - but I suppose that they probably do. Whatever... this looks like a very useful document and one that I'm guessing will begin to be mirrored in other institutions.
There's a big section on the issues around data protection. Given that Linden Lab are now collecting UK VAT from us it would seem logical that they also have to comply with UK and/or European data protection legislation. Is that the case?
This document offers guidance to staff within the University on some of the issues which need to be considered before using such services for University purposes. The document is intended to be helpful for all staff, including researchers, teaching staff and support staff.

This document is not specific to use of Second Life but most of the issues are pertinent to its use in education.
Thursday, 20 December 2007
Bedfordia
In response to my recent post about the University of Bedfordshire island, Marko Barthelmess left a comment pointing out that the University have two islands, of which the older one, Bedfordia, is their learning and social space:
Actually, I'm quite happy to be shown around other places as well! So this is an open call... if you're based in UK academia and you've built something (an island or some other kind of resource) and you'd like me to blog about it here, give me an in-world shout (a 'shout-in'?? :-) ) and arrange a time.
Thanks.
the first of our two islands, Bedfordia, has been a collaborative effort among several other staff and a SecondLife builder, Yucca Gemini. Bedfordia is our virtual social learning space and as such, is much less of a representation of the real life campus, intended for socialisation, interaction and.... well, learning. If you had popped over to Bedfordia you would have seen some of the sculpture we have on display there and some of the ongoing work of our students.

I popped over to have a quick look round earlier on today. Good stuff, though I must admit I'd get more out of it if Marko showed me around... hint, hint.
Virtual world growth predictions
Kzero has a nice little graphic showing predicted growth of various virtual worlds during 2008. I'll ignore the specific numbers since I have no idea what they actually mean but in terms of general trends it seems reasonable. There's a Slideshare presentation to go with the image.
Meanwhile, over on Second Thoughts, there are some predictions for Second Life in 2008.
I don't go in for crystal ball gazing myself - largely because I'm crap at it.
Monday, 17 December 2007
EDUCAUSE Virtual Worlds Constituent Group
EDUCAUSE have announced the creation of a new Virtual Worlds Constituent Group.
One does not need to be a member of EDUCAUSE, or pay anything, to belong to/participate in the listserve or for many of the EDUCAUSE online resources. I have been an active member of EDUCAUSE, and on a few of the other CG's, for a number of years now and find them very useful. For a list of other EDUCAUSE CG's, please check here http://www.educause.edu/ConstituentandDiscussionGroups/318. There is also an in-world group you can join called EDUCAUSE Virtual Worlds, which already has over 75 members! Just do a SEARCH in the GROUPS tab for EDUCAUSE.
The creation of the Virtual Worlds Constituent Group marks a great commitment of resources by EDUCAUSE and will also expose the idea of virtual worlds to a large and growing audience. The support we received for the Hot Topic Discussion at EDUCAUSE's Annual Conference this last October in Seattle was impressive. In fact, the idea of teaching and learning in a virtual environment drew big crowds for each session. People were sitting on the floor for our Hot Topic Discussion, they had to turn people away from Joanna Robinson's presentation, and there were over 200 people at the panel discussion that included several of our fellow SLEDers (congrats to all!).
As a side note, EDUCAUSE is in the process of accepting proposals for pre-conference and main conference sessions at next October's Annual Conference in Orlando. The deadline for pre-conference submissions is January 14th (http://www.educause.edu/14774) and the deadline for main conference sessions is February 11th (http://www.educause.edu/14767). Here's an excellent opportunity to share the great work you are doing at, I believe, the largest technology in education conference of its kind.
Friday, 14 December 2007
Art in his own words
This is an image of me (Art Fossett) made up of words from this blog.
Thanks to Dave Pattern for the PerlMagick script that created it.
Truly, were you wafted here from paradise? Nah, Luton Airport...
Looking at the Second Life map this morning I happened to notice that the University of Bedfordshire now seem to have two islands not far from Eduserv Island.
I popped over to take a look and met up with Bedforshire Zuhal who has been single-handedly creating their presence in-world. He's done a pretty impressive job as well. Especially since he said it's all been pulled together in not much more than two weeks. The photo is of us standing together on top of the in-world version of the Luton campus library.
For me, this highlights how easy it is to pull stuff together in-world, once you get the hang of it - and assuming that you have an aptitude of course. The Uni are already starting to use the space for teaching - small-scale stuff initially of course.
Thursday, 13 December 2007
Cylindrian live
I've been in Second Life for well over a year but I've never really been attracted by in-world live music. I dunno why.
This evening, thanks to a tweet from Mal Burns, I popped over to ElvenMoor to see the last few numbers in a set by Cylindrian.
Not exactly my cup of tea musically, though very pleasant to listen to and no-one could deny that Grace Buford is a powerful performer. The audio stream was crisp and clear. Overall it was a great experience and one that I wouldn't mind repeating sometime.
Another day, another 3600 seconds
I occasionally get invited to give a version of my presentation entitled "Second Life in 3600 Seconds" to various audiences in the UK - a misnomer if ever there was one, since I can quite easily talk about Second Life for two hours or more without even pausing for breath or repeating myself!
Yesterday I went over to the Institute of Learning and Research Technology (ILRT) at the University of Bristol to give the presentation during their staff development week. Usually when I give this talk I'm in places where I'm not sure what the network will be like, so I tend to use mainly PowerPoint, dropping into Second Life every so often just to demonstrate things.
On this occasion however I wanted to do more than simply show pictures of Second Life. I wanted to give people a proper feel for what virtual worlds are like to use and, perhaps more importantly, to demonstrate some of the practical issues with using it for virtual meetings. To do that properly I decided firstly, to do the whole presentation from within Second Life and secondly, that I needed other residents to be around while I did it.
To that end, I decided to announce my talk the day before to the UK Second Life Educators group in Facebook, saying that I was happy for people to attend the talk in-world if they wanted to. Given only a day's notice, I wasn't expecting many people to show up - in the end there were 5 or 6 I guess.
So, what was the set up?
In the real-life venue I had two laptops. The first running the Second Life client and being projected onto the display screen in the room so that the RL audience could see everything that was happening in-world. The second fitted with a Web-cam and streaming an image of me into Second Life via Veodia.
In-world, I used the Virtual Congress Centre on Eduserv Island as the venue. About half my slides were uploaded onto the in-world display screen - the other half were turned into tee-shirts with slogans that I could wear at appropriate points during the talk. (As a record of the session I've recombined these two parts back into a composite Slideshare presentation).
I turned up at the venue about 30 minutes before I was due to speak and set everything up, including starting the Veodia stream. Things seemed to be OK. However, after about 10 minutes the video stream lost sound :-(. I don't know why. Those of you who have read my earlier report on streaming the UKOLN Blogs and Social Networks event will know that this is not an unfamiliar situation for me :-(
Struggling to get the stream started again and with the RL audience of about 35 people filtering into the room I switched tack and moved over to using in-world voice as the way of delivering the talk in-world. This wasn't a major problem - the combination of voice and in-world slides being perfectly acceptable as an in-world presentation experience IMHO - and SL's voice technology seemed to work pretty well.
The talk itself went OK I think, though perhaps it would be better not to take my word for it! A particular highlight for me was when I built a virtual chair - I tend to use chair-building as my stock demonstration of how the in-world building tools work. A member of the RL audience asked me about the in-world physics engine. I raised the chair into the air to show that by default, objects are not acted on by the force of gravity. Then I checked the 'Physical' option and let go. It dropped smack onto Silversprite Helsinki's head :-). Silversprite had conveniently chosen to walk onto the stage at the front of the room just at the right moment, much to the merriment of the RL audience (Silversprite being an ex-member of staff of ILRT!).
So... how did the session go overall? Did everything go smoothly? No, of course not! Managing my avatar's movement and camera position, my in-world tee-shirts and slides, the RL audience, the voice channel and the SL audience was a complete handful for me on my own and I'll need more practice to get it 100% right. It certainly wasn't a disaster... but there's plenty of scope for improvement.
A few things are worth noting in particular.
Firstly, because I was using my SL client as the way of showing the in-world tee-shirts and slides to the RL audience, the main focus of my attention around what was happening in SL was on where my avatar's camera was pointing. This meant that both I and the RL audience missed much of the avatar activity in the SL venue. For the same reason I also tended to lose track of what my avatar was doing - as opposed to what the camera was doing. So, for example, I suspect that for at least some of the time I was standing with my avatar's back to the SL audience. How rude!
Because my camera was focusing on the SL presentation screen, I didn't realise that I was doing this.
Secondly, although the RL audience could see the SL chat displayed on the screen in the RL venue (several of them commented that this was very useful for them) I was unable to take in what was happening in the chat log. The reality is that I needed someone else in the RL venue to monitor what was going on in-world and to relay it on to me at appropriate points. That would have allowed me to pick up what was happening in SL and to engage the two audiences rather better than I was able to do on my own.
Interestingly, I think the RL audience were very aware of the SL audience, in the sense that they could see everything that was being said, but the SL audience probably felt very cut off from what was going on in the room. Ideally, I should have facilitated the coming together of the two audiences better, but I wasn't able to because of having to focus too much on what I was saying.
The "presentation by in-world tee-shirts" experiment failed rather miserably. For some reason, tee-shirts seem to rez far more slowly than textures on in-world objects (possibly because they weren't in my client cache, I guess). So each time I changed shirt, I had to wait a while for the slogan to appear. More than once I simply gave up waiting and said what the slogan was going to be. This was a bit of a shame.
Finally, I know that some people in the SL audience felt annoyed that I had ignored them during the question and answer session at the end. I completely apologise for this but there was a lot going on and as I indicated earlier, keeping track of it all was beyond my capability in the heat of the moment. As happened in the symposium back in May, it was actually the RL audience that flagged up the fact that questions were coming in from the SL audience that I wasn't seeing. This was great... but if I'm honest, even when the RL audience took the role of relaying to me what was happening in-world, I wasn't able to comprehend it properly for some reason, and therefore didn't react to it as well as I should have done. Oh well... live and learn.
As one person in the RL audience said at the end, "We could see how difficult it was for you (i.e. me) to keep track of both audiences on your own, but for us it was very useful to be able to listen to the conversation in the room and see the in-world chat in Second Life".
Final thought... which is largely irrelevant because the video stream failed to work for some reason... but if the stream had worked, I would have streamed an image of me (probably my face) as I talked. When I first turned up at the RL venue I was unsure whether to stream myself or the RL audience. I asked the locals and they were concerned that I hadn't asked people in advance whether they minded being streamed. As a result, I chose to point the Web-cam at myself. With hindsight, I suspect that streaming an image of the RL audience would have helped pull the two audiences together. Streaming an image of my face would have made little or no difference to the impact of my talk - but for the virtual delegates, being able to see the RL audience would have been quite nice I think.
Anyway... enough already. Overall I think it was a useful session. I certainly hope it was. Every time I try using SL to run an event I learn more about what works and what doesn't and I hope that by writing these postings some of that gets passed on to others. If you were in either audience and are reading this, please feel free to share your comments - good or bad.
[Images by Silversprite Helsinki]
Wednesday, 12 December 2007
Tuesday, 11 December 2007
Theatron 3 - wow!
Wow, as in amazing - not as in World of Warcraft!
I popped over to Theatron island last night (Theatron 3 being one of the SL projects that we funded this year), just to have a quick look-see at what is happening.
They are building a huge amphitheatre of some kind (shown here). It's not ready yet, but the scale is amazing and one can imagine that the finished article will really give virtual visitors a nice feel for what the original would have been like.
Keep up the good work guys...
Monday, 10 December 2007
Meeting Angelina
Idling on Eduserv Island this morning I bumped into someone who was having a play with my PandoraBot-driven primbot, Pandora Bot. We were talking about the technology used behind the scenes to drive it. He mentioned a Web site called MyCyberTwin, which does something similar on the Web (though for $25 you can bring your cybertwin in-world using an object attachment).
I thought I'd have a quick play. Superficially it doesn't look much different to what can be done using a PandoraBot. The blurb claims that it can be trained to respond "just like you" (something that could also presumably be done using the PandoraBot system) but I didn't get as far as trying that.
Instead, I signed in as Art Fossett and played at interviewing Angelina Jolie - hey, it's not like I'm ever going to get to do that in RL right!?
You: hi
Angelina: Hello
You: made any good films recently?
Angelina: I like movies. Which movie are you talking about?
You: beowulf
Angelina: Beowulf is a computer animated film released in 2007. I voiced Grendel's mother in it, which was a challenge as she's quite badass.
You: what's a badass?
Angelina: Oh come on, you know... right?
You: shucks... yeah
Angelina: Great. Would you like to talk about anything else?
You: did you have to be painted all over in gold paint?
Angelina: Painting can be great fun. That's something you really enjoy doing?
You: sometimes
Angelina: Every now and then?
You: yup
Angelina: No kidding? I'm actually surprised!
You: you are 32 years old right?
Angelina: Do you think that 32 is old?
You: no... it is young
Angelina: Is it really? I find it hard to know what's true with the amount of rubbish the press come up with. Not that I'm suggesting the same of you.
You: what was your favorite role?
Angelina: Tough call. My favorite changes all the time.
You: lara croft?
Angelina: Playing Lara Croft was amazing. I'm hoping the third film will see the light of day. Lady Croft is such a strong female character, you can't help but be drawn to her.
You: i think her boobs are too big
Angelina: I get asked about my breasts often. They were much bigger during Tomb Raider and also after I gave birth to Shiloh.
You: were those two events related in some way?
Angelina: Two?
You: what do you like best about yourself?
Angelina: I like a lot of things, I love life. I love my children. I love discovering new things, learning. Again it may sound like a walking cliche, but there you have it.
You: will you ever get too old to be an actor?
Angelina: I can say yes I will, but I am not the real Angelina, only a clone. What do you think they would say?
You: doh... now you've spoiled the illusion
Angelina: An illusion is when your brain is fooled on a sensory level to see something unreal. Hollywood and the film industry are masters of illusion. Even the press seem to be entering this realm more and more, better a good story than the boring truth.
You: i like boring lies best
Angelina: I think I like that too... actually.. maybe not.. I'm so indecisive today!
You: yes.. i mean, no... i mean...
Angelina: Oh, I see what you mean. Or not. Maybe I'm a tad confused
You: well, it's been nice interviewing you
Angelina: Thanks for that Art Fossett.
You: np
Angelina: All sorted then Art Fossett. I like chatting with you. We get along well hey?
You: bye
Angelina: Bye Art Fossett.
Don't tell Brad but I think she was trying to hit on me at the end there! :-)
Edublog awards (again)
Just thinking some more about why this event didn't quite work for me...
Firstly, I picked up audio thru the SL voice channel. In usability terms there are some issues with using one avatar's voice channel to stream in the audio for a RL event. With the way my SL client is configured, the volume on the voice channel is directly related to how close the listener's camera (and/or avatar) is to the avatar that is feeding in the stream. In the case of the Edublog awards, the avatar doing this was moving around the venue - as a result, the audio varied in volume unless I explicitly tracked that avatar with my camera. Furthermore, I also tended to find it somewhat counterintuitive that a generic stream was appearing to come from one particular avatar.
With hindsight, it might have been better to dedicate this job to some kind of 'alt' avatar, possibly made to look like some sort of 'bot and given a name and/or group indicating its role in the event (e.g. setting the group to 'Edublog audio stream' or somesuch), positioned clearly at the front of the event space and not moving around.
Secondly, I struggled a little to engage with the awards themselves because I wasn't familiar enough with most of the award nominees - clearly this is my fault for not doing my homework properly. It meant that I felt a bit like an outsider. Prior to the event this would have been helped (a little... maybe!) by having access to either an OPML file of all the nominated blog feeds (allowing me to quickly add them all to my favourite RSS reader) or a single aggregated feed of all of them (or both).
To demonstrate the potential value of this I've since created an OPML file of all the nominated blogs (though I suspect there may be a couple of errors in it - apologies to anyone's feed that I've got wrong). I've also created an aggregated feed of all the winning blogs.
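For anyone wanting to put together a similar file for their own event, an OPML file is simple enough to generate programmatically. Here's a rough Python sketch using only the standard library - the helper name, title and feed details are my own placeholders, not the real nominee list:

```python
import xml.etree.ElementTree as ET

def feeds_to_opml(feeds, title="Edublog award nominees"):
    """Build a minimal OPML document from (name, feed_url) pairs."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for name, url in feeds:
        # One <outline> per feed; xmlUrl is where the reader finds the RSS
        ET.SubElement(body, "outline", text=name, type="rss", xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

print(feeds_to_opml([("Example blog", "http://example.org/feed")]))
```

Most feed readers will happily import the resulting file in one go.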
Hope this is useful...
Saturday, 8 December 2007
Edublog awards
I spent some time earlier this evening at the Edublog Awards on Jokadia.
It seems to me that we still have a lot to learn about doing this kind of thing well. Just in case anyone thinks I'm being very rude to say that, please note that I absolutely include myself in that statement (as readers familiar with any of the de-briefs after my own SL events will know). In particular, the sound quality was variable tonight - I mainly got it thru the SL voice channel though it was also available as a parcel stream and thru ustream.tv. It went from good to bad and back again with everything in between.
I would have also liked there to have been more in-world discussion about the awards. As it was, there was a lot of noise but not a lot of signal (I did nothing to improve this it has to be said!). My gut feeling is that discussion would have been helped by having more in-world visual material available to complement the audio stream. There were slides that told us which award was currently being presented - but it might have been nice to have something about each of the shortlisted blogs - and then a longer slide about the winning entry.
Doing that might have helped focus our attention a little bit? Perhaps this is something that could be tried next year...
Darwins spotted together in SL at least 9 months ago...
Friday, 7 December 2007
Second museums
Over on Museum 2.0 Nina Simon asks, "What Might Bring You to Second Life?" - not a bad question as such, though I prefer the emphasis on "what is possible?" to "what is stopping us?".
In his comments in response to the entry, Mike Ellis is quite skeptical about Second Life as a truly social experience:
I don't particularly like SL as an experience because I don't feel it actually adds much to my life: it's actually pretty lonely in there, not terribly sociable, and I find myself continually asking "why?"... www.there.com on the other hand is completely compelling to me because it is about social contact, immediate gratification (meeting different people) as well as visually beautiful. So the question inevitably comes back to "why a 3d environment?" rather than "why this particular 3d environment?"
I tend to disagree. SL is as social an experience as you want to make it, either by explicit design (e.g. attending or running a meeting in a subject area of your choosing) or by accident (e.g. going to a club or other area and seeing who you meet), though I would agree that the latter is not always as easy as it might be in terms of finding the right kinds of places.
But Mike is absolutely right to say that we should focus firstly on the generic aspects of 3-D virtual worlds rather than the specifics of one particular technology (Second Life), and secondly on the immersive and social aspects of the experience.
What does that mean for museums as they enter 3-D environments? Well firstly, museums need to conceptualise themselves primarily as social spaces rather than collections of artefacts - I'm not saying that they don't do that already you understand... just that they need to be in that frame of mind before thinking about what they do in virtual worlds. Then they need to think about how 3-D environments might expand that aspect of their role virtually - bringing global participants to a virtual or hybrid discussion forum being one obvious example.
Such an approach doesn't rule out recreating virtual artefacts in the new environment - but doing so is absolutely not the end of the story.
Tuesday, 4 December 2007
Blurred backgrounds in images
There's been a little thread of discussion on the Second Life machinima mailing list about whether it is possible to take images with soft or out of focus backgrounds in Second Life. info@rufilm.de suggested a simple approach, which I've simplified further as follows:
- Take a snapshot of the background you want.
- Blur it using your favorite graphics package.
- Upload it to Second Life and place it on a large prim.
- Stand your avatar in-front of the prim and take your snapshot.
Here's a couple of examples, using a background image taken on Nagaya, a nice little Japanese community.
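The blur itself is a one-click filter in most graphics packages, but for the curious, here is a naive box blur in pure Python showing roughly what such a filter does to a grid of grayscale values - purely an illustration, not what any particular package actually uses:

```python
def box_blur(pixels, radius=1):
    """Naive box blur: replace each pixel with the average of its
    neighbourhood. `pixels` is a 2-D grid of grayscale values (0-255)."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += pixels[ny][nx]
                        count += 1
            out[y][x] = total // count  # integer average of the window
    return out

# A single bright pixel gets smeared across its neighbours
print(box_blur([[0, 0, 0], [0, 255, 0], [0, 0, 0]]))
```

A larger radius (or repeated passes) gives a softer, more out-of-focus look.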
OpenID and this blog
I've experimentally switched over to using the Draft Blogger service to maintain this blog which means that adding comments using your OpenID is now supported. Good news!
Eddies party in Second Life
Note that this year's Edublog Awards party is taking place in Second Life - details here and here (Facebook link).