Spark: grouping and sorting to extract the top N values per group

A question for the experts here:
I'm developing with Scala + Spark, and I've run into trouble implementing the following requirement with an RDD!
Data:
User  Location  Days
user1 L1 28
user1 L2 20
user1 L3 15
user2 L1 30
user2 L2 15
user3 L5 3
user3 L6 18
user4 L7 4
How can I use a Spark RDD to group by user and extract, for each user, the location with the largest number of days?

Desired result:
RDD:
Array((user1,L1,28), (user2,L1,30), (user3,L6,18), (user4,L7,4))
The crux is to compute the maximum days within each user's group while carrying the location along. I've puzzled over this for quite a while with no result; any pointers would be appreciated.

https://blog.csdn.net/accptanggang/article/details/52926138

I've read the link you posted, but it only computes an RDD of each user and their maximum days, (user, daycount); the location can't be extracted!!
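One common way around this is to keep the location inside the value and reduce by key, so that the comparison on days carries the location with it. Below is a minimal sketch of that pattern using the sample data from the question; the names `data` and `maxByUser` are just illustrative, and `sc` is assumed to be the SparkContext from spark-shell:

import org.apache.spark.{SparkConf, SparkContext}

// (user, location, days) tuples as in the question
val data = sc.makeRDD(Seq(
  ("user1", "L1", 28), ("user1", "L2", 20), ("user1", "L3", 15),
  ("user2", "L1", 30), ("user2", "L2", 15),
  ("user3", "L5", 3),  ("user3", "L6", 18),
  ("user4", "L7", 4)))

val maxByUser = data
  .map { case (user, loc, days) => (user, (loc, days)) }   // key by user, keep location in the value
  .reduceByKey((a, b) => if (a._2 >= b._2) a else b)       // keep whichever pair has more days
  .map { case (user, (loc, days)) => (user, loc, days) }   // flatten back to triples

maxByUser.collect().foreach(println)
// prints (user1,L1,28), (user2,L1,30), (user3,L6,18), (user4,L7,4) in some order

Because reduceByKey combines values within each partition before shuffling, this avoids moving every record for a user to one node.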

Thanks everyone, I've found a solution to the problem:
import org.apache.spark.{SparkConf, SparkContext}

// raw records are (location, user, days-as-string) triples;
// `sc` is the SparkContext (already defined in spark-shell)
val rdd3 = sc.makeRDD(Array(
    ("a","user1","25"), ("b","user1","27"), ("c","user1","12"),
    ("d","user2","23"), ("e","user2","1"),  ("a","user3","30")), 2)
  .map { case (lac, user, cnt) =>
    (user, lac, cnt.toInt)   // re-key as (user, location, days)
  }

// group by user, sort each user's records by days in descending order,
// and keep only the first (largest) one, location included
val topK = rdd3.groupBy(_._1).map { case (user, records) =>
  val dayTop1 = records.toList
    .sortBy(_._3)(Ordering.Int.reverse)
    .take(1)
    .map(item => item._2 + "," + item._1 + "," + item._3)
  (user, dayTop1)
}
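A follow-up note on this solution: groupBy shuffles all of a user's records to one executor, which can get expensive for large groups, and the title asks for the top N values rather than just the maximum. Here is a sketch of a bounded top-N per user with aggregateByKey, reusing rdd3 from above; `n` and `insertTopN` are illustrative names, not part of the original post:

// fold each user's records into a list of at most n (location, days) pairs
val n = 2

def insertTopN(acc: List[(String, Int)], x: (String, Int)): List[(String, Int)] =
  (x :: acc).sortBy(-_._2).take(n)

val topN = rdd3
  .map { case (user, loc, days) => (user, (loc, days)) }
  .aggregateByKey(List.empty[(String, Int)])(
    insertTopN,                                 // fold records within a partition
    (l, r) => (l ++ r).sortBy(-_._2).take(n))   // merge partial top-n lists

topN.collect().foreach(println)

Each partition keeps only n entries per user before the shuffle, so no full group is ever materialized in memory.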

https://blog.csdn.net/rlhua/article/details/14222125