On a regular Monday morning, you get a video call from a friend. He looks distressed and flustered, and, concerned, you ask him what happened. He tells you his five-year-old son was hit by a car and is in the ICU, and that he doesn't have enough money to pay for the treatment. Now he's crying, asking if you could lend him money to save his poor son. What would you do?
“Of course I will give him the money to save the boy. He’s my friend and he needs my help.”
If that’s what you’re thinking, think again.
Because it could be a scam. The person you saw "in the flesh" may not be who you think he is, even though "your friend" was speaking to you live on video.
Police in the northern Chinese city of Baotou have uncovered a deepfake fraud in which a man was scammed out of 4.3 million yuan, the most so far stolen in China in this way.
According to disclosures by police in Fuzhou in eastern Fujian province, on April 20, a fraudster stole an individual’s WeChat account and used it to make a video call to a businessman named Guo, an existing contact on the individual’s WeChat app.
The con artist asked for Guo's personal bank account number and then claimed an equivalent sum had been wired to that account, sending him a screenshot of a fraudulent payment record.
Without checking that he had received the money, Guo sent two payments from his company account totaling the amount requested, the police said.
"At the time, I verified the face and voice of the person video-calling me, so I let down my guard," police quoted Guo as saying. But an impersonator had used face-swapping and voice-mimicking artificial intelligence technologies.
The man only realized his mistake after messaging the friend whose identity had been stolen, who had no knowledge of the transaction. Guo alerted local police.
At the request of the Fuzhou authorities, their colleagues in Baotou later intercepted some of the funds at a local bank, but nearly 1 million yuan was unrecoverable. The police’s investigations are ongoing.
The case set off discussion on the microblogging site Weibo about threats to online privacy and security, with the hashtag "#AI scams are exploding across the country" gaining more than 180 million views on Tuesday. The hashtag was later apparently taken down amid fears that the case might inspire copycat crimes.
"If photos, voices and videos all can be utilized by scammers," one user wrote, "can information security rules keep up with these people's techniques?”
A number of similar frauds have occurred around China as AI technology becomes more widely applied. The public security departments of Shanghai and Zhejiang province have previously disclosed such cases.
Beyond direct scams, some frauds involve live e-commerce platforms, where AI is used to replace the faces of live streamers with those of stars and celebrities, exploiting their market appeal to fool people into buying goods and raising further issues of fraud and intellectual property infringement.
An illegal industrial chain has formed with people using face-swapping technology for online scams, according to a report by China News Service's financial channel.
A website providing deepfake software services sells a complete set of models that can be used on various live-streaming platforms for only 35,000 yuan, according to its customer service.
Deepfake technology, which has progressed steadily for nearly a decade, has the ability to create talking digital puppets. The software can also create characters out of whole cloth, going beyond traditional editing software and expensive special effects tools used by Hollywood, blurring the line between fact and fiction to an extraordinary degree.
Deepfake technology has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally "undress" more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives' voices on the phone.
Another example of how eerily accurate the technology has become is a replication of actor Leonardo DiCaprio speaking at the United Nations.
According to the World Economic Forum (WEF), deepfake videos are increasing at an annual rate of 900%, and recent technological advances have even made it easier to produce them.
Identifying disinformation will only become more difficult as deepfake technology grows sophisticated enough to produce Hollywood-quality footage on nothing more than a laptop.
In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.
Many people are growing concerned about deepfakes and AI-generated content being used for ill. First, the technology can spread misinformation, for instance by making people believe a politician made a shocking statement that they never did. Second, it can be used to scam people, especially the elderly.
In China, AI companies have been developing deepfake tools for more than five years. In a 2017 publicity stunt at a conference, the Chinese speech-recognition specialist iFlytek made a deepfake video of the U.S. president at the time, Donald J. Trump, speaking in Mandarin.
But even that AI pioneer has now fallen victim to the very technology it has been developing. Shares of iFlytek fell 4.26% on Wednesday after a viral screenshot surfaced of what appeared to be a chatbot-generated article making unsubstantiated claims against the company, fueling public concern about the potential misuse of generative AI.
The potential pitfalls of groundbreaking AI technology have received heightened attention since US-based company OpenAI in November launched ChatGPT.
China has announced ambitious plans to become a global AI leader by 2030, and a slew of tech firms including Baidu, Alibaba, JD.com, NetEase and ByteDance have rushed to develop similar products.
ChatGPT is unavailable in China, but the American software is acquiring a base of Chinese users who use virtual private networks to gain access to it for writing essays and cramming for exams.
But it is also being used for more nefarious purposes.
This month police in the northwestern province of Gansu said "coercive measures" had been taken against a man who used ChatGPT to create a fake news article about a deadly bus crash that was spread widely on social media.
China has been tightening scrutiny of such technology and apps amid a rise in AI-driven fraud, mainly involving the manipulation of voice and facial data, and adopted new rules in January to legally protect victims.
And a draft law proposed in mid-April by China's internet regulator would require all new AI products to undergo a "security assessment" before public release. Service providers would also be required to verify users' real identities and to provide details about the scale and type of data they use, their underlying algorithms and other technical information.
The global buzz surrounding the launch of ChatGPT has seen a spate of AI-related product launches in China. However, the Fuzhou fraud case has combined with other high profile deepfake incidents to remind people of the potential downsides to such advances in artificial intelligence.
AI regulation is still a developing subject in China. Initial excitement around the potential of ChatGPT and similar AI products in China has given way to concerns over how AI could be used to supercharge criminal activity.
Tech talk aside, what can ordinary people do to avoid falling victim to AI-related scams?
First of all, experts suggest being wary of unexpected, urgent calls asking for money that appear to come from loved ones or colleagues. For example, ask the caller some personal questions to verify their identity.
Second, verify through a different channel. Make up an excuse, say you have to call them back, and then call back on what you know to be the person's real number.
And if you receive such a video call, look for unnatural facial features and expressions that betray a fake. Unnatural eye movement and a lack of blinking are clear signs of deepfakes, since natural eye movement is hard for deepfake tools to replicate. The lighting and facial features in the image or video, such as the hair and teeth, may also seem mismatched. Among the most obvious giveaways are misaligned facial expressions, sloppy lip-to-voice synchronization, unnatural body shapes, and awkward head and body positions.
On an ordinary Monday morning, you suddenly receive a WeChat video call, and on the other end is your friend. He looks frantic and helpless, crying that his child has been hit by a car and is being treated in the ICU. He urgently needs a large sum of money to save the child's life, and begs you to lend it and transfer it right away.
At this critical moment, what would you do?
If you are thinking, "Of course saving the child comes first; I can't stand by while a child's life is at stake. I must transfer the money immediately," you may already have fallen into an "AI face-swap" scam.
Recently, the hashtag "AI scams are exploding across the country" briefly topped the trending list, sparking heated discussion among netizens. A similar AI face-swap fraud has just occurred again, this time even "more efficient": 2.45 million yuan stolen in nine seconds.
In April, an impersonation scam took place in the Anqing Economic and Technological Development Zone. Anti-fraud officers from the district's public security bureau found that the scammers had used a nine-second AI face-swap video to pose as an "acquaintance," lowering the victim's guard before carrying out the fraud.
On April 27, a WeChat "friend" suddenly video-called a Mr. He. When the call connected, Mr. He saw the "friend" sitting in a meeting room, but just as he was about to ask more, the "friend" hung up, saying he had something important to discuss while in the meeting and asking Mr. He to add him on QQ.
On QQ, the "friend" then said a project bid needed working capital and asked Mr. He to advance the money. "He had video-called me and he was someone I knew, so I didn't think twice and transferred it," Mr. He said. Trusting the "acquaintance," he immediately had a family member transfer 2.45 million yuan to the designated account, and only learned he had been scammed when he later phoned the real friend.
After the case was reported, investigators worked through the night and on the afternoon of April 28 arrested three suspects, including a man surnamed Li, seizing 26 mobile phones and freezing or recovering more than 1 million yuan of the stolen funds. On May 22, police returned an initial 1.32 million yuan to Mr. He. The case remains under investigation.
Similar cases have previously occurred in many places.
On April 20, a friend of Mr. Guo of Fuzhou, Fujian province, suddenly contacted him by WeChat video, saying a friend of his needed a 4.3 million yuan deposit for a bid in another city. Trusting his friend, and having "verified" his identity on the video call, Mr. Guo transferred 4.3 million yuan in two payments to the specified bank card without checking whether any money had actually arrived. Only when he later phoned his friend did he learn he had been scammed: the fraudster had impersonated the friend using AI face-swapping and voice-mimicking technology.
"He video-called me, and I confirmed his face and voice on the call, so I let down my guard," Mr. Guo said. Fortunately, after he reported the case, police and banks in Fuzhou and in Baotou, Inner Mongolia, quickly activated a payment-freeze mechanism and intercepted 3.3684 million yuan; the remaining 931,600 yuan had already been moved on and is still being pursued.
Beyond online fraud, AI "celebrity" live-stream selling has also been proliferating, leaving viewers unable to tell real from fake.
Tutorials on "face-swap live-streaming" have recently appeared online. In one video demonstrating the effect, after the user loaded a certain celebrity's model into the software, the person in front of the camera took on facial features resembling the star's in the live feed, though the face shape and hairstyle stayed the same.
"I clicked into a live-stream room, and there was 'Dilraba' selling goods." Netizens have recently found what looked like A-list actresses hawking products in live-stream rooms. On closer inspection, the "celebrities" quickly gave themselves away: they were ordinary streamers using real-time AI face-swapping. Real-time face-swapping is quietly appearing in live-stream rooms, and popular actresses such as Yang Mi, Dilraba and Angelababy have become its prime targets.
Face-swapping of a sort has long existed on social platforms, mainly as "effects" or "props." Those first-generation uses had clear and harmless aims: beautification, pranks, entertainment. The "AI face-swapping" now drawing attention is something else entirely. Streamers brazenly selling goods while wearing a star's face have gone far beyond a joke; this is plainly commercial profiteering by improper means.
To be clear, although face-swapped streamers are a new application of generative AI, there is no regulatory vacuum here and no absence of applicable law. Such conduct squarely meets the elements of portrait-right infringement under the Civil Code: it is a textbook case of acting "without the person's consent" and "for profit."
Worse still, criminals have used face-swapping to synthesize obscene videos, charging viewers and even offering "custom" versions featuring different actresses on request, while profiting illegally from selling the face-swap software itself. On April 9, the Hangzhou People's Procuratorate announced that a man surnamed Yu, born in the 1980s, had been indicted on suspicion of producing and disseminating obscene materials for profit.
It is not only ordinary people; listed companies are also plagued by AI-generated fake news, and have even begun trading accusations over it.
On May 24, shares of iFlytek fell nearly 10% intraday, a drop traced to two rumor posts circulating online. The first claimed the United States was considering adding iFlytek, Meiya Pico and others to its "Entity List," barring them from using American components or software. The second claimed iFlytek had been exposed for collecting users' private data on a large scale and using it for AI research.
Responding to the first rumor after the market closed on May 24, iFlytek noted that it had already been placed on the Entity List in October 2019 and had since switched to a supply chain based mainly on domestic, non-U.S. suppliers, with no material impact on its operations.
The second rumor, it said, was itself a case of "AI fraud": iFlytek's preliminary judgment was that the circulated text and screenshots had been generated by a generative-AI application. The logo visible in the screenshots was that of Ernie Bot, Baidu's large language model and generative AI product.
No sooner had iFlytek issued its denial than Zhang Wenquan, head of marketing for Ernie Bot at Baidu, fired back on WeChat Moments, saying the episode "reeks of orchestration" and urging "our peer to fix its own problems instead of constantly smearing others; everyone can see what is going on."
iFlytek has yet to respond to Baidu's latest remarks.
In fact, using AIGC to fabricate and spread false information is nothing new; society has already entered a "post-truth era."
On the morning of May 22, U.S. time, an image of an "explosion near the Pentagon" went viral on overseas social networks. According to media reports, a U.S. Department of Defense spokesperson confirmed that the image, which spread widely on Google search and Twitter, was disinformation. As it circulated, U.S. stocks wobbled visibly, with the Dow Jones Industrial Average dropping about 80 points within four minutes.
The development of AIGC is posing unprecedented challenges for the governance of false information. Even as AI grows "ever more human-like," the attendant risks of its "hallucinations" and "emergent" behavior are drawing attention across society.
Why has AIGC become an enabler of rumors? Professor Chen Bing, vice dean of Nankai University's law school and a research fellow at the China Institute for New Generation Artificial Intelligence Development Strategies, says AIGC has matured to the point of handling tasks ranging from emails and code to news reports and academic papers, in a style so close to human conversation that ordinary people cannot tell the difference. Its low barrier to use, low cost and high output also allow vast quantities of false information to be generated in a short time, quickly drowning out the truth.
Zhang Linghan, a professor at the Institute of Data Law of the China University of Political Science and Law, points out that sayings such as "seeing is believing" show that photos and videos carry far more subconscious credibility with the public than text. Deep-synthesis technology overturns precisely that assumption: it can forge audio, images and video, fabricating people's actions and words. Its rapid rise in recent years has supplied new tools for political, military and economic crime, and even terrorism, posing severe security challenges.
In the view of Zhao Jingwu, an associate law professor at Beihang University and deputy director of a Beijing science and technology innovation research base, the most striking advantage of generative AI such as ChatGPT is that it presents information in a way that approximates human thought and expression, making it even harder for netizens to tell true from false. Combined with AI's capacity to generate content in bulk, the resulting flood of online rumors will clearly blunt traditional countermeasures such as official debunking and account bans.
With deep synthesis and AI going mainstream, balancing innovation against risk prevention has become an unavoidable question.
The Cyberspace Administration of China, the Ministry of Industry and Information Technology and the Ministry of Public Security earlier jointly issued the Provisions on the Administration of Deep Synthesis of Internet Information Services, which took effect on January 10 this year. The provisions state that services which generate or edit portrait images or video through face generation, face replacement, face manipulation or posture manipulation, or which significantly alter personal identity characteristics, and which may confuse or mislead the public, must place a conspicuous label in a reasonable position on the generated or edited content. Beyond such labeling to help users distinguish the virtual from the real, live-streaming platforms must also fulfill their responsibilities and stop AI-enabled portrait-right infringement at the source.
In addition, on April 11 China released its first regulatory document aimed specifically at generative AI, with the Cyberspace Administration of China publishing the draft Measures for the Administration of Generative Artificial Intelligence Services for public comment. The draft sets out targeted rules for exactly this scenario: streamers who swap their faces for those of actresses to sell goods can no longer claim that "whatever the law does not forbid is permitted."
So, as an ordinary person, how can you spot an AI face-swap scam?
Such scams keep emerging, but an "AI fake face" inevitably has flaws, and the details can reveal whether the face on screen is real.
Experts note that the texture of an "AI fake face" often betrays it. The contours of the eyes or teeth in a forged video are frequently inconsistent; the two pupils may differ in color, or the reflections in them may not match; and because many forged videos are produced at a lower resolution than the original, the synthesized teeth can have unnaturally neat edges.
An "AI fake face" may also defy normal human physiology. A healthy adult typically blinks once every 2 to 10 seconds, with each blink lasting 0.1 to 0.4 seconds, whereas the blinking in a forged video may not follow this pattern.
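Those physiological numbers can be turned into a rough plausibility check. The sketch below is a minimal, hypothetical illustration only: it assumes blink timestamps have already been extracted by some upstream eye-state detector (not shown), and the 50% tolerance threshold is an illustrative default, not a validated value.

```python
def blink_timing_plausible(blink_events,
                           interval_range=(2.0, 10.0),
                           duration_range=(0.1, 0.4),
                           tolerance=0.5):
    """Return True when most blink durations and inter-blink intervals
    fall inside the human ranges cited above.

    blink_events: list of (start_s, end_s) tuples, sorted by start time.
    tolerance: fraction of measurements allowed outside the ranges.
    """
    if len(blink_events) < 2:
        # A clip with almost no blinking is itself a deepfake warning sign.
        return False

    # Blink lengths, and gaps between consecutive blink onsets.
    durations = [end - start for start, end in blink_events]
    intervals = [b[0] - a[0] for a, b in zip(blink_events, blink_events[1:])]

    lo_d, hi_d = duration_range
    lo_i, hi_i = interval_range
    bad = sum(1 for d in durations if not lo_d <= d <= hi_d)
    bad += sum(1 for i in intervals if not lo_i <= i <= hi_i)

    return bad / (len(durations) + len(intervals)) <= tolerance


# Blinks every ~4 s, each lasting ~0.2 s: consistent with a real person.
human = [(0.0, 0.2), (4.0, 4.2), (8.1, 8.3), (12.0, 12.25)]
# No blink for 30 s, then the eyes stay shut for 2 s: not human-like.
fake = [(0.0, 2.0), (30.0, 32.0)]

print(blink_timing_plausible(human))  # True
print(blink_timing_plausible(fake))   # False
```

A real detector would of course combine many cues, such as texture artifacts, lip sync and frame-to-frame consistency, rather than rely on blink timing alone.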
Finally, because the mouth moves more often and faster than any other part of the face, AI software struggles to render its continuous motion accurately, so the mouth region of an "AI fake face" can give the forgery away. Forged videos also tend to exhibit jitter, producing inconsistencies between frames.
The Internet Society of China has likewise issued a warning about the new wave of AI face-swap scams: remote transfers demand multiple layers of verification to protect your money. If someone claiming to be a family member or friend urges you to transfer funds, raise your guard immediately. In typical scenarios such as transfers and remittances, confirm through an additional channel, for example by calling back the person's known phone number; never send money on the basis of a single, unverified channel, no matter who the other party claims to be.
Executive Editor: Sonia YU
Editor: LI Yanxia
Host: Stephanie LI
Writer: Stephanie LI
Sound Editor: Stephanie LI
Graphic Designer: ZHENG Wenjing, LIAO Yuanni
Produced by 21st Century Business Herald Dept. of Overseas News.
Presented by SFC